\section{Learning results} \label{sec:result}
\newcommand{\dk}{\emph{k}}
We use the setup described in Section~\ref{sec:setup} to learn models of the OpenSSH, BitVise and DropBear SSH server implementations. 
OpenSSH is the focal point, as it is the most popular implementation of SSH (with over 80 percent market share in 2008) 
and the default server on many UNIX-based systems. DropBear is an alternative to OpenSSH designed for low-resource 
systems. BitVise is a well-known proprietary Windows-only SSH implementation. 

In our experimental setup, the {\dlearner} and {\dmapper} ran in a Linux virtual machine. OpenSSH and DropBear were 
learned over a local connection, whereas BitVise was learned over a virtual connection with the Windows host machine. 
Certain adjustments had to be made for each implementation, in particular to the timing parameters.

OpenSSH was learned using the full alphabet, whereas DropBear and BitVise were learned using a reduced alphabet. Both versions of 
the alphabet are described in Subsection~\ref{sec:alphabet}. For testing, we used random and exhaustive variants of the testing algorithm described in 
\cite{SMJV15}, which generates efficient test suites. Each generated test comprises an access sequence, a middle section of length {\dk} and a 
distinguishing sequence. For a given {\dk}, the exhaustive variant generates tests for all possible middle sections and all states. Passing all such tests provides a degree of confidence, 
namely, that the learned model is correct unless the (unknown) model of the implementation has at least {\dk} more states. The random variant produces tests 
with randomly generated middle sections. It provides no formal confidence, but past experience shows it to be more effective at finding counterexamples, since {\dk} 
can be set to higher values. We executed a random test suite with {\dk} set to 4, comprising 40000 tests for OpenSSH and 20000 tests each for BitVise and DropBear. 
We then ran an exhaustive test suite with {\dk} set to 2 for all implementations.
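
To illustrate the structure of these tests, the Python sketch below assembles a test from its three components and enumerates middle sections for both variants. The helper names are ours, we assume middle sections of length exactly {\dk}, and the computation of access and distinguishing sequences from the hypothesis is elided.

\begin{verbatim}
import itertools
import random

def make_test(access_seq, middle, distinguishing_seq):
    # A test is the concatenation: access sequence, then
    # middle section, then distinguishing sequence.
    return list(access_seq) + list(middle) + list(distinguishing_seq)

def exhaustive_middles(alphabet, k):
    # Exhaustive variant: all middle sections of length k.
    return itertools.product(alphabet, repeat=k)

def random_middle(alphabet, k):
    # Random variant: one randomly generated middle section.
    return [random.choice(alphabet) for _ in range(k)]
\end{verbatim}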

Table~\ref{tab:experiments} describes the exact versions of the systems analyzed together with statistics on learning and testing, namely:
(1) the number of states in the learned model, (2) the number of hypotheses built during the learning process and (3) the total number of 
learning and test queries run. 


\begin{table}[!ht]
\centering
\scriptsize
\begin{tabular}{|l|l|l|l|}
\hline
\textbf{SUT}      & \textbf{States} & \textbf{Hypotheses} & \textbf{Num. Queries} \\ \hline %& \textbf{Tests to last Hyp.} & \textbf{Tests on last Hyp.} \\ \hline
OpenSSH 6.9p1-2   & 31              & 4                   & tbc                   \\ %& 1322  & 50243 \\
BitVise 7.23      & 65              & 15                  & tbc                   \\ %& 9549  & 65040 \\
DropBear v2014.65 & 17              & 2                   & tbc                   \\ \hline %& 15268 & 56174 \\
\end{tabular}
\caption{Statistics for learning experiments}
\label{tab:experiments}
\end{table}


The large number of states is due to several factors. First of all, some systems exhibit buffering behavior. In particular, BitVise queues 
responses for inputs sent during a key re-exchange and delivers them all once the exchange is done. 
A considerable number of states is also added by {\dmapper}-generated outputs such as \textsl{ch\_none} or \textsl{ch\_max}, which signal that no channel is open or 
that the maximum number of channels has been opened. To give a concrete example, the {\dmapper} saves a channel identifier on every \textsl{ch\_open}. If \textsl{ch\_open} 
is called again, a \textsl{ch\_max} output is generated. The channel identifier is removed by a \textsl{ch\_close} input. This leads to pairs of 
states that differ only in whether a channel identifier is stored, even in states where channels are not relevant (for example, states before authentication).
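
As a minimal sketch of this bookkeeping, the Python class below tracks at most one channel identifier; apart from the inputs and outputs named above, all names are hypothetical.

\begin{verbatim}
class ChannelState:
    # Sketch of the mapper's channel bookkeeping; only ch_open,
    # ch_close, ch_none and ch_max come from the actual mapper.
    def __init__(self):
        self.channel_id = None   # identifier of the open channel, if any

    def on_ch_open(self, fresh_id):
        if self.channel_id is not None:
            return "ch_max"      # maximum number of channels reached
        self.channel_id = fresh_id
        return "forward"         # hypothetical: pass request to server

    def on_ch_close(self):
        if self.channel_id is None:
            return "ch_none"     # no channel is currently open
        self.channel_id = None
        return "forward"
\end{verbatim}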