Commit 6571ae71 authored by Erik Poll

fixed some typos

parent 7d54b2dc
@@ -3,7 +3,7 @@
We have combined model learning with abstraction techniques to infer models of the OpenSSH, Bitvise and DropBear SSH server implementations. We have also
formalized several security and functional properties drawn from the SSH RFC specifications. We have verified these
properties on the learned models using model checking and have uncovered several minor inconsistencies; crucially, though, the security-critical properties were met
by all implementations.
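To make the kind of property checked concrete, the following is a toy reachability check over a learned Mealy machine, verifying that no channel can be opened successfully before authentication. The transition map, state names and message labels are all invented for illustration; the actual verification was performed with a model checker on the learned models.

```python
# Toy illustration of checking a security property on a learned Mealy
# machine: "no channel is opened successfully before authentication".
# States, labels and the transition encoding are invented for this sketch.

def pre_auth_channel_open(transitions, start):
    """Explore all states reachable WITHOUT sending AUTH_OK and report
    whether any of them answers CH_OPEN with CH_OPEN_SUCCESS.
    `transitions` maps (state, input) -> (output, next_state)."""
    seen, todo = {start}, [start]
    while todo:
        state = todo.pop()
        for (src, inp), (out, nxt) in transitions.items():
            if src != state or inp == "AUTH_OK":
                continue  # only follow pre-authentication behavior
            if out == "CH_OPEN_SUCCESS":
                return True  # property violated
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return False
```

On a conforming server model this returns `False`; a model that grants `CH_OPEN_SUCCESS` from an unauthenticated state makes it return `True`.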
Abstraction was provided by a {\dmapper} component placed between the
{\dlearner} and the {\dsut}. The {\dmapper} was constructed from an
@@ -17,17 +17,18 @@ explore most aforementioned behavior.
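The role of the mapper can be sketched as follows: it concretizes abstract input symbols from the learner into protocol messages for the server, and abstracts concrete responses back into output symbols, while tracking any protocol state the abstraction hides. All names and message encodings below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a mapper between a learner and an SSH server
# (the SUT). Symbols, message dictionaries and field names are invented.

class Mapper:
    def __init__(self, send_fn, recv_fn):
        self.send = send_fn      # sends a concrete message to the SUT
        self.recv = recv_fn      # receives the SUT's concrete reply
        self.session_id = None   # state the mapper itself must track
                                 # (unused in this minimal sketch)

    def concretize(self, symbol):
        # map an abstract input symbol to a concrete protocol message
        if symbol == "KEXINIT":
            return {"type": "SSH_MSG_KEXINIT", "cookie": b"\x00" * 16}
        if symbol == "SERVICE_REQUEST_AUTH":
            return {"type": "SSH_MSG_SERVICE_REQUEST",
                    "service": "ssh-userauth"}
        raise ValueError(f"unknown abstract input: {symbol}")

    def abstract(self, message):
        # collapse concrete fields (ids, payloads) into one output symbol
        return message["type"].replace("SSH_MSG_", "")

    def step(self, symbol):
        self.send(self.concretize(symbol))
        return self.abstract(self.recv())
```

The learner only ever sees the abstract alphabet (`KEXINIT`, `SERVICE_REQUEST_AUTH`, ...), which is what makes classical Mealy-machine learning applicable at all.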
There were several challenges encountered. Firstly, building a {\dmapper} presented a considerable technical challenge, as it required re-structuring of an actual
SSH implementation. Secondly, because we used classical learning algorithms, we had to ensure that the abstracted implementation behaved
like a deterministic Mealy Machine. Here, time-induced non-determinism was difficult to eliminate. Buffering also presented problems,
leading to a considerable increase in the number of states. Moreover, the systems analyzed were relatively slow, which meant learning took
several days\marginpar{\tiny Erik: For a single server, right??}. This, compounded by the size of the learning alphabet, forced us to use a reduced alphabet for two of the analyzed implementations.
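The determinism requirement mentioned above can be enforced mechanically: cache every query, and flag any repeated query whose answer differs from the cached one. The sketch below assumes a hypothetical `run_query` function that resets the SUT and runs one input word; it is illustrative, not the tooling used in the case study.

```python
# Minimal sketch of detecting non-determinism during learning: a repeated
# query with a different answer means the abstracted SUT is not behaving
# as a deterministic Mealy machine. Names are illustrative.

class DeterminismChecker:
    def __init__(self, run_query):
        self.run_query = run_query  # runs one input word on the reset SUT
        self.cache = {}

    def query(self, word):
        outputs = tuple(self.run_query(word))
        if word in self.cache and self.cache[word] != outputs:
            raise RuntimeError(f"non-determinism on {word!r}: "
                               f"{self.cache[word]} vs {outputs}")
        self.cache[word] = outputs
        return outputs
```

In practice, time-induced non-determinism surfaces exactly here: the same word occasionally yields a timeout symbol instead of the usual response, and the check above turns that into a reproducible error rather than a corrupted hypothesis model.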
The work has several limitations, which also point to directions for future work. First of all, the {\dmapper} was not formalized, unlike in~\cite{TCP2016}, so we did not
produce a concretization of the abstract models. Consequently, model checking results cannot be fully transferred to the actual implementations. Formal definition
of the mapper and concretization of the learned models would address this. The {\dmapper} also caused considerable redundancy in the learned models; re-tweaking the abstractions used, in particular those for managing channels, could alleviate this problem while also improving learning times. This in turn would facilitate learning with expanded alphabets instead of resorting to restricted ones.
Furthermore, the {\dmapper} abstraction could be refined to give more insight into the implementations. In particular, parameters,
such as the session identifier or data sent over channels, could be extracted from the {\dmapper} and potentially handled by existing Register Automata learners~\cite{ralib2015,tomte2015}. These learners
can infer systems with parameterized alphabets, state variables and simple operations on data. Finally, we had to eliminate any timing-related behavior, as it could not be handled by the classical learners used. To that end, preliminary work on learning timed automata~\cite{GrinchteinJL10} could be leveraged.
Despite these limitations, our work provides a compelling application of learning and model checking in a security setting, on a widely used protocol. We hope this lays
some more groundwork for further case studies, as well as advances in learning techniques.