Roy Maxion

Research Professor

Office: 8107 Gates & Hillman Centers


Phone: (412) 268-7556

RESEARCH GOALS. Our ambition is to design and build computer systems that are safe and robust against various kinds of faults, including malicious faults from information-warfare and insider/masquerader attacks, as well as from unanticipated random error and what many would interpret as user bone-headedness. We are trying to build systems that work reliably, all the time, for everyone ... and to understand what makes some systems unreliable, and what can be done about it.

Specific areas of interest are:

KEYSTROKE DYNAMICS AND FORENSICS. Can users be identified on the basis of their typing rhythms and styles? Keystroke timing can be monitored and used as a biometric in much the same way that handwriting and fingerprints have been used. If we use machine-learning classifiers, profilers and anomaly detectors, how reliable can the identification process be? What stimuli should users type? Does it matter if everyone knows your password, because no one will be able to type it like you do? Does it matter what the password is, or how long it is? Can snippets of email or other text be used to determine that the authorized user (and not someone else) is typing at the terminal? Can keystroke patterns be used in two-factor authentication schemes? What if a machine identifies a user, with the consequence that someone is accused of a crime and ends up in court - what are the forensic properties of keystroke analysis? What kinds of experimental methodologies are required for answering these kinds of questions with high confidence? These and other issues are being addressed in our keystroke project.
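The timing-based identification idea can be sketched as a toy anomaly detector: a user's typing profile is the element-wise mean of several timing vectors (e.g., key-hold times and inter-key latencies collected while typing a fixed password), and a new attempt is scored by its Manhattan distance from that profile. The function names, timing values, and choice of distance measure below are illustrative assumptions for the sketch, not the project's actual detectors.

```python
# Minimal sketch of keystroke-dynamics anomaly detection.
# Each sample is a vector of per-keystroke timings (seconds); all
# values and names here are invented for illustration.

def train_profile(samples):
    """Average the training vectors into a per-user typing profile."""
    n = len(samples)
    dims = len(samples[0])
    return [sum(s[i] for s in samples) / n for i in range(dims)]

def anomaly_score(profile, attempt):
    """Manhattan distance from the profile: larger = less typical typing."""
    return sum(abs(p - a) for p, a in zip(profile, attempt))

# Toy timing vectors for one user typing the same password three times.
training = [
    [0.12, 0.31, 0.25, 0.18],
    [0.11, 0.29, 0.27, 0.20],
    [0.13, 0.33, 0.24, 0.19],
]
profile = train_profile(training)

genuine  = [0.12, 0.30, 0.26, 0.19]   # rhythm close to the profile
impostor = [0.25, 0.10, 0.45, 0.05]   # a very different rhythm
```

In a real system the score would be compared against a per-user threshold tuned on held-out data; the trade-off between false accepts and false rejects is exactly the kind of question the experimental methodology must answer.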

MASQUERADE DETECTION. A masquerader is someone who pretends to be another user while invading the target user's accounts, directories, or files. This project is building systems that will detect the activities of a masquerader by determining that a user's activities violate a profile developed for that user. Profiling is based on various machine-learning and classification techniques.
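A very simple version of such a user profile, sketched here with made-up commands and an assumed rarity threshold, flags a session when too many of its commands are rare or unseen in the user's history; the project's real profilers use considerably richer machine-learning and classification techniques.

```python
# Sketch of profile-based masquerade detection over shell commands.
# The 0.05 rarity threshold and all commands are illustrative assumptions.
from collections import Counter

def build_profile(commands):
    """Relative frequency of each command in a user's training history."""
    counts = Counter(commands)
    total = len(commands)
    return {cmd: n / total for cmd, n in counts.items()}

def masquerade_score(profile, session):
    """Fraction of session commands that are rare (or unseen) for this user."""
    return sum(1 for c in session if profile.get(c, 0.0) < 0.05) / len(session)

history = ["ls", "cd", "vim", "make", "ls", "cd", "git", "make", "ls", "vim"]
profile = build_profile(history)

normal   = ["ls", "cd", "make", "vim"]   # consistent with the profile
intruder = ["nmap", "wget", "chmod", "nc"]  # never seen from this user
```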

SYNTHETIC-DATA ENVIRONMENT / FAULT INJECTION. How do we gain confidence in a system's ability to detect failures, anomalies or performance perturbations? One method is by synthetic fault injection. This project's goal is to build a synthetic environment that can be used for validating algorithms for fault/anomaly/intrusion detection. It will be able to replicate environmental conditions faithfully and repeatably, and will be easy to use for both experts and novices.
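A minimal illustration of the fault-injection idea, with all names and rates invented for the example: wrap a function so that it fails with a chosen probability, and seed the random source so the same fault sequence can be replayed repeatably — the repeatability the environment above requires.

```python
# Sketch of repeatable synthetic fault injection.
# fault_rate, fault_value, and read_sensor are illustrative assumptions.
import random

def inject_faults(func, fault_rate, fault_value=None, seed=None):
    """Wrap func so each call returns fault_value with probability
    fault_rate; a fixed seed makes the fault sequence repeatable."""
    rng = random.Random(seed)
    def wrapped(*args, **kwargs):
        if rng.random() < fault_rate:
            return fault_value          # injected fault
        return func(*args, **kwargs)    # normal behavior
    return wrapped

def read_sensor():
    """Stand-in for the component whose failure modes we want to exercise."""
    return 42.0

faulty = inject_faults(read_sensor, fault_rate=0.3,
                       fault_value=float("nan"), seed=1)
readings = [faulty() for _ in range(100)]
```

Running the detector under test against `readings` (instead of the clean sensor) lets us measure how reliably it notices the injected anomalies, under identical conditions every run.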

DEPENDABILITY/ASSURANCE CASES. We are exploring the use of formal argumentation to support various claims of system dependability. One example is justifying safety claims for fly-by-wire or drive-by-wire systems; another is justifying claims that a fault-diagnosis system will handle all unanticipated faults; a third is that a biometric system can reliably discriminate among all its subjects. We may say that a system is safe or dependable or secure, but how do we muster the evidence to show it? And once the evidence is in hand, how do we structure and assess it to support the claims that are being made? The process is similar to presenting evidence in a jury trial, and some of the work involves liaison with trial attorneys. In the future, highly automated decision-making systems will need to construct arguments and gather evidence autonomously to support the correctness of their decisions, and we seek to lay the foundations for that.

SECURITY METRICS. You can't manage what you can't measure, and so far there have been no useful metrics for security that enable us to answer the following types of important questions: How secure is this system? Is System A more secure than System B? How much money/effort will it take to secure a system to a certain level? What is the risk that an attack will penetrate my system? What is the risk of a breach of availability, integrity or confidentiality on my system? Creating reliable metrics and measures to help answer such questions is of paramount importance to the security community, and we are actively engaged in this undertaking.

PERFORMANCE-SHAPING FACTORS. Stress or fatigue can affect your performance on almost any task. There are also elements of the task itself that influence performance; for example, the peculiarities of a particular programming language may induce programmers to make certain kinds of mistakes, or the design of a user interface may induce user error. We are interested in identifying such performance-shaping factors, measuring their effects on human/computer error, and finding ways to change computing environments so that errors are committed less frequently, thereby making computing safer, more secure and more reliable. Two examples that beg for attention are: Why do so many coding mistakes occur in exception-handling routines, as opposed to other places in the code? Why do users and administrators make so many mistakes configuring systems such as routers, servers, firewalls, file protections, and encrypted email? Such mistakes can introduce serious security and performance vulnerabilities into systems, so it is worthwhile to determine what factors influence human performance, particularly when that performance is flawed.


PROJECTS: Performance-shaping Factors; Fault Tolerance; Experiments in Cyberspace; Fingerprints in Cyberspace