Learning from Authoritative Security Experiment Results
Paper and Submission Guidelines
- Begin with a structured abstract that accurately summarizes the whole paper. It should be 150-350 words in length and include the following elements: background, aim, method, results, and conclusions. (See below)
- Include, at a minimum, background, aim, method, results, and conclusion sections.
- Provide sufficient detail that peers can verify the validity of the experiment(s) conducted and repeat the experiments.
Papers not meeting these criteria will be rejected without review, and no deadline extensions will be granted for reformatting.
Abstracts should contain concise statements that tell the whole story of the study, presented in a consistent structure that lets readers quickly assess whether the paper meets their needs and warrants reading in full. The essential elements of a structured abstract are background, aim, method, results, and conclusions:
- Background. State the background and context of the work described in the paper.
- Aim. State the research question, objective, or purpose of the work in the paper.
- Method. Briefly summarize the method used to conduct the research, including the subjects, procedure, data, and analytical method.
- Results. State the outcome of the research using measures appropriate for the study conducted. Results are essentially the numbers.
- Conclusions. State the lessons learned as a result of the study and recommendations for future work. The conclusions are the “so what” of the study.
This format gives the author a good structure not only for the paper itself but also for the slides used to present the work.
Here is an example abstract (140 words) from a LASER 2012 paper, cited below:
Kevin S. Killourhy and Roy A. Maxion. 2012. Free vs. transcribed text for keystroke-dynamics evaluations. In Proc. of the 2012 Workshop on Learning from Authoritative Security Experiment Results (LASER ’12). ACM, New York, NY, USA, 1–8.
Background. One revolutionary application of keystroke dynamics is continuous reauthentication: confirming a typist’s identity during normal computer usage without interrupting the user.
Aim. In laboratory evaluations, subjects are typically given transcription tasks rather than free composition (e.g., copying rather than composing text), because transcription is easier for subjects. This work establishes whether free and transcribed text produce equivalent evaluation results.
Method. Twenty subjects completed comparable transcription and free-composition tasks; two keystroke-dynamics classifiers were implemented; each classifier was evaluated using both the free-composition and transcription samples.
Results. Transcription hold and keydown-keydown times are 2–3 milliseconds slower than the corresponding free-text features; statistical tests showed these effects to be significant. However, these effects did not significantly change evaluation results.
Conclusions. The additional difficulty of collecting freely composed text from subjects seems unnecessary; researchers are encouraged to continue using transcription tasks.