\section{The learning setup} \label{sec:setup}
%This chapter will cover the setup used to infer the state machines. We provide a general setup outline in Section~\ref{components}. The tested SSH servers are described in Section~\ref{suts}, which were queried with the alphabet described in Section~\ref{alphabet}. Section~\ref{setup-handling} will cover the challenging SUT behaviour faced when implementing the mapper, and the adaptations that were made to overcome these challenges. Section~\ref{layers-individual} will discuss the relation between state machines for individual layers and the state machine of the complete SSH protocol. The conventions on visualisation of the inferred state machines are described in Section~\ref{visualisation}.

%Throughout this chapter, an individual SSH message to a SUT is denoted as a \textit{query}. A \textit{trace} is a sequence of multiple queries, starting from a SUT's initial state. Message names in this chapter are usually self-explanatory, but a mapping to the official RFC names is provided in Appendix~\ref{appendixa}.

%\section{Components}\label{components}

The learning setup consists of three components: the {\dlearner}, the {\dmapper} and the {\dsut}. The {\dlearner} generates abstract inputs representing SSH messages. The {\dmapper} transforms these messages into well-formed SSH packets and sends them to the {\dsut}. The {\dsut} sends response packets back to the {\dmapper}, which, in turn, translates these packets into abstract outputs. The {\dmapper} then sends the abstract outputs back to the {\dlearner}.
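To illustrate, the round trip for a single query can be sketched as follows. The class names and the dictionary-based packet format are ours and purely illustrative; they are not part of LearnLib or Paramiko, and the real components communicate over sockets.

```python
# Sketch of one learner query passing through the mapper to the SUT and
# back; names and the dict-based packet format are illustrative only.

class Mapper:
    def __init__(self, sut):
        self.sut = sut  # any object with a send(packet) -> packet method

    def concretize(self, abstract_input):
        # The real mapper builds a well-formed SSH packet here.
        return {"type": abstract_input}

    def abstract(self, packet):
        # The real mapper maps a received SSH packet to an output symbol.
        return packet["type"].upper() if packet else "NO_RESP"

    def process(self, abstract_input):
        """Handle one abstract input coming from the learner."""
        response = self.sut.send(self.concretize(abstract_input))
        return self.abstract(response)

class EchoSut:
    """Stand-in SUT that acknowledges every packet."""
    def send(self, packet):
        return {"type": packet["type"] + "_ack"}

print(Mapper(EchoSut()).process("kexinit"))  # KEXINIT_ACK
```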


The {\dlearner} uses LearnLib~\cite{LearnLib2009}, a Java library implementing $L^{\ast}$-based algorithms for learning Mealy machines. The {\dmapper} is based on Paramiko, an open source SSH implementation written in Python\footnote{Paramiko is available at \url{http://www.paramiko.org/}}. We opted for Paramiko because its code is relatively well structured and documented. The {\dsut} can be any existing implementation of an SSH server. The three components communicate over sockets, as shown in Figure~\ref{fig:components}.
\begin{figure}
	\centering
  \includegraphics[scale=0.29]{components.pdf}
  \caption{The SSH learning setup.}
  \label{fig:components}
\end{figure}

SSH is a complex client-server protocol. In our work so far we have concentrated on learning models of the server implementation, not of the client.
We further restrict learning to the terminal service of the Connection layer, as we consider it the most interesting
from a security perspective. Algorithms for encryption, compression and hashing are left at their default settings and are not explored. Also, the starting
state of the {\dsut} is one where a TCP connection has already been established and where SSH versions have been exchanged, which are prerequisites for starting the Transport layer protocol.

%figure
%It is therefore important to focus on messages for which interesting state-changing behaviour can be expected. 

\subsection{The learning alphabet}\label{subsec:alphabet}

The alphabet we use consists of inputs, which correspond to messages
sent to the server, and outputs, which correspond to messages received
from the server. We split the input alphabet into three parts, one
for each of the protocol layers.
%\marginpar{\tiny Erik: the output alphabet is not discussed anywhere,
%but for the discussion of the mapper in the next section it should be}

Learning does not scale with a growing alphabet, and since we are only
learning models of servers, we remove those inputs that are not
intended to ever be sent to the server\footnote{This means we exclude
the messages \textsc{service\_accept}, \textsc{ua\_accept},
\textsc{ua\_failure}, \textsc{ua\_banner}, \textsc{ua\_pk\_ok},
\textsc{ua\_pw\_changereq}, \textsc{ch\_success} and
\textsc{ch\_failure} from our alphabet.}. Furthermore, from the
Connection layer we only use messages for channel management and the
terminal functionality. Finally, because we only explore
protocol behavior after SSH versions have been exchanged, we exclude
the messages for exchanging version numbers.
%\marginpar{\tiny Erik: I
%rephrased all this to make it simpler. Is it still ok?}

The resulting lists of inputs for the three protocol layers are given
in Tables~\ref{trans-alphabet}--\ref{conn-alphabet}. In some
experiments, we used only a subset of the most essential inputs, to
further speed up experiments. This \textit{restricted alphabet}
significantly decreases the number of queries needed for learning
models while only marginally limiting the explored behavior. We discuss
this again in Section~\ref{sec:result}. Inputs included in the
restricted alphabet are marked with `*' in the tables below.
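For reference, the three input alphabets and the restricted subset can also be written down as plain data. The lower-case symbol names follow the tables; the grouping into Python lists is our own convention.

```python
# Input alphabets per layer; RESTRICTED holds the '*'-marked symbols.
TRANSPORT = ["disconnect", "ignore", "unimpl", "debug",
             "kexinit", "kex30", "newkeys", "sr_auth", "sr_conn"]
AUTH = ["ua_none", "ua_pk_ok", "ua_pk_nok", "ua_pw_ok", "ua_pw_nok"]
CONNECTION = ["ch_open", "ch_close", "ch_eof", "ch_data",
              "ch_edata", "ch_window_adjust", "ch_request_pty"]

RESTRICTED = {"kexinit", "kex30", "newkeys", "sr_auth", "sr_conn",
              "ua_pk_ok", "ua_pk_nok",
              "ch_open", "ch_close", "ch_eof", "ch_data", "ch_request_pty"}

FULL = TRANSPORT + AUTH + CONNECTION
assert RESTRICTED <= set(FULL)  # the restricted alphabet is a subset
print(len(FULL), len(RESTRICTED))  # 21 12
```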

Table~\ref{trans-alphabet} lists the Transport layer inputs. We include a version of the \textsc{kexinit} message with \texttt{first\_kex\_packet\_follows} disabled.
This means no guess~\cite[p. 17]{rfc4253} is attempted on the {\dsut}'s parameter preferences. Consequently, the {\dsut} has to send its own \textsc{kexinit} in order to
convey its parameter preferences before key exchange can proceed. Also included are inputs for establishing new keys (\textsc{kex30}, \textsc{newkeys}), disconnecting (\textsc{disconnect}), as well as the special inputs \textsc{ignore}, \textsc{unimpl} and \textsc{debug}. The latter three are not interesting, as they are normally ignored by implementations; hence they are excluded from our restricted alphabet. \textsc{disconnect} proved costly time-wise, so it was excluded as well.
%We include two versions of the \textsc{kexinit} message, one where \texttt{first\_kex\_packet\_follows} is disabled, the other when it is enabled, in which case, the message would make a guess on the security parameters~\cite[p. 17]{rfc4253}. Our mapper can only handle correct key guesses, so the wrong-guess procedure as described in ~\cite[p. 19]{rfc4253} was not supported. 
%\textsc{ignore}, \textsc{unimpl} and \textsc{debug} 
%When needed, SUTs were configured to make this guess work by altering their cipher preferences. The SSH version and comment string (described in Section~\ref{ssh-run-trans}) was not queried because it does not follow the binary packet protocol.

\begin{table}[!ht]
\centering
\small
\begin{tabular}{ll}
\textbf{Message} & \textbf{Description} \\
\textsc{disconnect} & Terminates the current connection~\cite[p. 23]{rfc4253} \\
\textsc{ignore} & Has no intended effect~\cite[p. 24]{rfc4253} \\
\textsc{unimpl} & Intended response to an unimplemented message~\cite[p. 25]{rfc4253} \\
\textsc{debug} & Provides other party with debug information~\cite[p. 25]{rfc4253} \\
\textsc{kexinit}* & Sends parameter preferences~\cite[p. 17]{rfc4253} \\
%\textsc{guessinit}* & A \textsc{kexinit} after which a guessed \textsc{kex30} follows~\cite[p. 19]{rfc4253} \\
\textsc{kex30}* & Initializes the Diffie-Hellman key exchange~\cite[p. 21]{rfc4253} \\
\textsc{newkeys}* & Requests to take new keys into use~\cite[p. 21]{rfc4253} \\
\textsc{sr\_auth}* & Requests the authentication protocol~\cite[p. 23]{rfc4253} \\
\textsc{sr\_conn}* & Requests the connection protocol~\cite[p. 23]{rfc4253}
\end{tabular}
\caption{Transport layer inputs}
\label{trans-alphabet}
\end{table}

The Authentication layer defines a single client message type, the authentication request~\cite[p. 4]{rfc4252}. Its parameters contain all information needed for authentication. Four authentication methods exist: none, password, public key and host-based. Our {\dmapper} supports all methods except host-based authentication, because some {\dsuts} lack support for this feature. Both the public key and password methods have \textsc{ok} and \textsc{nok} variants, which supply correct and incorrect credentials, respectively. Our restricted alphabet supports only public key authentication, as the implementations processed it faster than the other authentication methods.

\begin{table}[!ht]
\centering
\small
\begin{tabular}{ll}
\textbf{Message} & \textbf{Description} \\
\textsc{ua\_none} & Authenticates with the ``none'' method~\cite[p. 7]{rfc4252} \\
\textsc{ua\_pk\_ok}* & Provides a valid name/key combination~\cite[p. 8]{rfc4252} \\
\textsc{ua\_pk\_nok}* & Provides an invalid name/key combination~\cite[p. 8]{rfc4252} \\
\textsc{ua\_pw\_ok} & Provides a valid name/password combination~\cite[p. 10]{rfc4252} \\
\textsc{ua\_pw\_nok} & Provides an invalid name/password combination~\cite[p. 10]{rfc4252} \\
\end{tabular}
\caption{Authentication layer inputs}
\label{auth-alphabet}
\end{table}

The Connection layer allows the client to manage channels and to request/run services over them. In accordance with our learning goal,
our {\dmapper} only supports inputs for requesting terminal emulation, plus inputs for channel management, as shown in Table~\ref{conn-alphabet}.
The restricted alphabet supports only the most general channel management inputs; those excluded are not expected to produce a state change.


\begin{table}[!ht]
\centering
\small
\begin{tabular}{ll}
\textbf{Message} & \textbf{Description} \\
\textsc{ch\_open}* & Opens a new channel~\cite[p. 5]{rfc4254} \\
\textsc{ch\_close}* & Closes a channel~\cite[p. 9]{rfc4254} \\
\textsc{ch\_eof}* & Indicates that no more data will be sent~\cite[p. 9]{rfc4254} \\
\textsc{ch\_data}* & Sends data over the channel~\cite[p. 7]{rfc4254} \\
\textsc{ch\_edata} & Sends typed data over the channel~\cite[p. 8]{rfc4254} \\
\textsc{ch\_window\_adjust} & Adjusts the window size~\cite[p. 7]{rfc4254} \\
\textsc{ch\_request\_pty}* & Requests terminal emulation~\cite[p. 11]{rfc4254} \\
\end{tabular}
\caption{Connection layer inputs}
\label{conn-alphabet}
\end{table}
%The learning alphabet comprises of input/output messages by which the {\dlearner} interfaces with the {\dmapper}. Section~\ref{sec:ssh} outlines essential inputs, while Table X provides a summary
%of all messages available at each layer. \textit{\textit{}}

%table


\subsection{The mapper}\label{subsec:mapper}

The {\dmapper} must provide a translation between abstract messages
and well-formed SSH messages: it has to translate the abstract inputs
listed in Tables~\ref{trans-alphabet}--\ref{conn-alphabet} into actual
SSH packets, and to translate the SSH packets received in response
into abstract outputs.

A special case occurs when no output is received from the
{\dsut}; in that case the {\dmapper} gives back to the {\dlearner} a
\textsc{no\_resp} message, to indicate that a time-out occurred.
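The time-out handling can be sketched as follows; the function name and the raw-bytes decoding are our simplifications, as the real {\dmapper} decodes full SSH packets rather than plain strings.

```python
import socket

NO_RESP = "NO_RESP"  # abstract output signalling a time-out

def recv_output(sock, timeout=0.5):
    """Read one response from the SUT socket; map a time-out to NO_RESP.
    (Sketch: the real mapper decodes SSH packets, not raw bytes.)"""
    sock.settimeout(timeout)
    try:
        data = sock.recv(4096)
    except socket.timeout:
        return NO_RESP
    return data.decode() if data else NO_RESP

# A local socket pair stands in for the mapper<->SUT connection.
mapper_end, sut_end = socket.socketpair()
print(recv_output(mapper_end, timeout=0.1))  # NO_RESP (nothing sent yet)
sut_end.sendall(b"KEX31")
print(recv_output(mapper_end))               # KEX31
```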

The sheer complexity of the {\dmapper} meant that it was easier to
adapt an existing SSH implementation than to construct the
{\dmapper} from scratch. Paramiko already provides mechanisms for
encryption/decryption, as well as routines for constructing, sending
and receiving the different types of packets. These
routines are called by control logic dictated by Paramiko's own state
machine. The {\dmapper} was constructed by replacing this control
logic with logic dictated by the messages received from the {\dlearner}.
%over a socket connection

The {\dmapper} maintains a set of state variables to record parameters
of the ongoing session, including for example the server's preferences
for key exchange and encryption algorithm, parameters of these
protocols, and -- once it has been established -- the session key.
These parameters are updated when receiving messages from the server,
and are used to concretize inputs to actual SSH messages to the server.

For example, upon receiving a \textsc{kexinit}, the {\dmapper} saves
the {\dsut}'s preferences for key exchange, hashing and encryption
algorithms. Initially these parameters are all set to the defaults
that any server should support, as required by the RFC. The
{\dmapper} supports Diffie-Hellman key exchange, which it initiates
on a \textsc{kex30} input from the {\dlearner}.
After this, the {\dsut} responds with a \textsc{kex31} message
(assuming the protocol run so far is correct), from which
the {\dmapper} saves the hash, as well as the new
keys. Receipt of the \textsc{newkeys} response from the {\dsut} makes
the {\dmapper} use the newly negotiated keys in place of
the old ones, if any existed.
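This bookkeeping can be sketched as follows; the class, field and method names are ours, and the default algorithm names are merely plausible placeholders, not the exact defaults our {\dmapper} uses.

```python
class MapperState:
    """Session parameters tracked by the mapper (sketch; names ours)."""

    def __init__(self):
        # Placeholder defaults a server could be expected to support.
        self.kex_algorithm = "diffie-hellman-group14-sha1"
        self.cipher = "aes128-ctr"
        self.mac = "hmac-sha1"
        self.session_keys = None   # keys currently in use
        self.pending_keys = None   # negotiated but not yet activated

    def on_kexinit(self, server_prefs):
        # Record the server's first (preferred) choice from each name-list.
        self.kex_algorithm = server_prefs["kex"][0]
        self.cipher = server_prefs["ciphers"][0]
        self.mac = server_prefs["macs"][0]

    def on_kex31(self, exchange_hash, keys):
        # Save the exchange hash and the freshly derived keys.
        self.exchange_hash = exchange_hash
        self.pending_keys = keys

    def on_newkeys(self):
        # Server's NEWKEYS: take the negotiated keys into use.
        if self.pending_keys is not None:
            self.session_keys, self.pending_keys = self.pending_keys, None

s = MapperState()
s.on_kexinit({"kex": ["curve25519-sha256"],
              "ciphers": ["chacha20-poly1305"],
              "macs": ["hmac-sha2-256"]})
s.on_kex31("H", ("key_enc", "key_mac"))
s.on_newkeys()
print(s.session_keys)  # ('key_enc', 'key_mac')
```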

The {\dmapper} contains a buffer for storing opened channels, which is initially empty.
On a \textsc{ch\_open} from the {\dlearner}, the {\dmapper} adds a channel to the buffer
with a randomly generated channel identifier; on a \textsc{ch\_close}, it removes the channel
(if there was one). The buffer size, i.e.\ the maximum number of open channels, is limited to one.

Lastly, the {\dmapper} also stores the sequence number of  the last received message from the {\dsut}.
This number is then used when constructing \textsc{unimpl} inputs. 
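The channel buffer and sequence-number bookkeeping can be sketched as follows; the class and field names are our own, purely illustrative choices.

```python
import random

MAX_CHANNELS = 1  # buffer size is limited to one open channel

class ChannelBuffer:
    """Tracks open channels and the last received sequence number (sketch)."""

    def __init__(self):
        self.channels = []    # initially empty
        self.last_seq = None  # seq. number of the last message from the SUT

    def open_channel(self):
        if len(self.channels) >= MAX_CHANNELS:
            return None  # caller answers CH_MAX instead of querying the SUT
        cid = random.randrange(2**32)  # randomly generated channel identifier
        self.channels.append(cid)
        return cid

    def close_channel(self):
        if self.channels:  # remove the channel, if there was one
            self.channels.pop()

    def on_message(self, seq):
        # Remember the sequence number for constructing UNIMPL inputs.
        self.last_seq = seq
```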

In the following cases, inputs are answered by the {\dmapper} directly
instead of being sent to the {\dsut} to find out its response:
\begin{enumerate}
\item on receiving a \textsc{ch\_open} input when the buffer has reached its size limit, the {\dmapper} directly responds with \textsc{ch\_max};
\item on receiving any input operating on a channel (all Connection layer inputs other than \textsc{ch\_open}) when the buffer is empty, the
{\dmapper} directly responds with \textsc{ch\_none};
\item if the connection with the {\dsut} has been terminated, the {\dmapper}
     responds with a \textsc{no\_conn} message, as sending further
     messages to the {\dsut} is pointless in that case.
\end{enumerate}
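These three cases can be sketched as a single guard function that runs before anything is sent to the {\dsut}; the function name and argument layout are ours.

```python
# Direct answers the mapper gives without consulting the SUT (sketch).
CH_MAX, CH_NONE, NO_CONN = "CH_MAX", "CH_NONE", "NO_CONN"

# Connection layer inputs that operate on an existing channel.
CHANNEL_INPUTS = {"ch_close", "ch_eof", "ch_data", "ch_edata",
                  "ch_window_adjust", "ch_request_pty"}

def direct_answer(inp, open_channels, connected, limit=1):
    """Return the mapper's immediate response, or None when the input
    should be forwarded to the SUT."""
    if not connected:
        return NO_CONN  # connection terminated: further messages are pointless
    if inp == "ch_open" and open_channels >= limit:
        return CH_MAX   # channel buffer already at its size limit
    if inp in CHANNEL_INPUTS and open_channels == 0:
        return CH_NONE  # channel operation while no channel is open
    return None
```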
%
In many ways, the {\dmapper} acts much like an SSH client, hence our
decision to build it by adapting an existing client implementation.


\subsection{Practical complications}

%There are three practical complications in learning models of SSH
%servers: (1) an SSH server may exhibit \emph{non-determistic}
%behaviour; (2) a single input to the server can produce a
%\emph{sequence} of outputs ratheer than just a single output, and (3)
%\emph{buffering} behaviour of the server. These complication are
%discussed below.

SSH implementations can exhibit non-deterministic behaviour. The
learning algorithm cannot cope with non-determinism -- learning will
not terminate -- so it has to be detected, which our {\dmapper} does.
There are a few sources of non-determinism in SSH:
\begin{enumerate}
\item Underspecification in the SSH specification (for example, by not
     specifying the order of certain messages) allows some
     non-deterministic behavior. Even if client
     and server do implement a fixed order for the messages they send, the
     asynchronous nature of communication means that the
     interleaving of sent and received messages may vary. Moreover,
     client and server are free to intersperse \textsc{debug} and
     \textsc{ignore} messages at any given time\footnote{The \textsc{ignore}
     messages are meant to thwart traffic analysis.}.
\item Timing is another source of non-deterministic behavior. For
     example, the {\dmapper} might time out before the {\dsut} has
     sent its response. Some {\dsuts} also behave
     unexpectedly when a new query is received too soon after the
     previous one. Hence, in our experiments we adjusted the time-out
     periods so that neither of these situations occurs and the {\dsut}
     behaves deterministically throughout.
     %did not occur.  
		
		%However, other timing-related quirks can still
     %cause non-determinism. For example, some {\dsuts} behave
     %unexpectedly when a new query is received too shortly after the
     %previous one.
%For example, a trace in which a valid user authentication is performed within five milliseconds after an authentication request on DropBear can cause the authentication to (wrongly) fail.  
\end{enumerate}
%
To detect non-determinism, the {\dmapper} caches all observations
in an SQLite database and verifies whether a new observation differs
from one cached in a previous protocol run. If so, it raises
a warning, which then needs to be investigated manually.

An added benefit of this cache is that it allows the {\dmapper} to
supply answers to some inputs without actually sending them to the
{\dsut}. This sped up learning considerably when we had to restart
experiments: any new experiment on the same {\dsut} could start where
the previous experiment left off, without re-running all inputs. This
was an important benefit, as experiments could take several days.

%A subsequent identical learning run can quickly resume from where the previous one was ended, as the cache from the previous run is used to quickly respond to all queries up to the point the previous run ended.

Another practical problem besides non-determinism is that an SSH server
may produce a sequence of outputs in response to a single input. This
means it does not behave as a Mealy machine, which produces exactly
one output per input. Dealing with this is simple: the {\dmapper}
concatenates all outputs into one, and it produces this sequence as
the single output to the {\dlearner}.

A final challenge is presented by forms of `buffering', which we
encountered in two situations.  Firstly, some implementations buffer
incoming requests during a key re-exchange; only once the rekeying is
complete are all these messages processed. This leads to a
\textsc{newkeys} response (indicating rekeying has completed),
directly followed by all the responses to the buffered requests.  This
would lead to non-termination of the learning algorithm, as for every
sequence of buffered messages the response is different.  To
prevent this, we treat the sequence of queued responses as a single
output \textsc{buffered}.
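Both forms of output post-processing can be sketched in one function; the `+` separator and the exact symbol names are our choice, not those of the actual {\dmapper}.

```python
def combine_outputs(outputs):
    """Collapse the SUT's responses to one input into a single Mealy
    output (sketch; separator and symbol names are our choice)."""
    if not outputs:
        return "NO_RESP"
    # Replies to requests buffered during rekeying follow NEWKEYS; their
    # number varies per trace, so they collapse into one BUFFERED symbol.
    if outputs[0] == "NEWKEYS" and len(outputs) > 1:
        return "NEWKEYS+BUFFERED"
    return "+".join(outputs)
```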

Secondly, buffering happens when opening and closing channels, since a
{\dsut} can close only as many channels as have previously been opened.
Learning this behavior would lead to an infinite state machine, as we
would need a state `there are $n$ channels open' for every number $n$.
For this reason, we restrict the number of simultaneously open
channels to one. The {\dmapper} returns a custom response
\textsc{ch\_max} to a \textsc{ch\_open} message whenever this limit is
reached.