Commit a38b4252 authored by Paul Fiterau Brostean

Updates. Pdf model

parent 4d82876a
@@ -2,13 +2,13 @@
We have combined model learning with abstraction techniques to infer models of the OpenSSH, Bitvise and DropBear SSH server implementations. We have also
formalized several security and functional properties drawn from the SSH RFC specifications. We have verified these
properties on the learned models using model checking and have uncovered several minor inconsistencies, though crucially, the security-critical properties were met
by all implementations.
Abstraction was provided by a {\dmapper} component placed between the {\dlearner} and the {\dsut}. The {\dmapper} was constructed from an existing SSH
implementation. The alphabet the {\dmapper} exposed to the {\dlearner} explored key exchange, setting up a secure connection, several authentication methods,
and the opening and closing of channels over which the terminal service could be requested. We used two alphabets: a full version for OpenSSH, and
a restricted version for the other implementations. The restricted alphabet was still sufficient to explore most of the aforementioned behavior.
We encountered several challenges. Firstly, building a {\dmapper} presented a considerable technical challenge, as it required re-structuring an actual
SSH implementation. Secondly, because we used classical learning algorithms, we had to ensure that the abstracted implementation behaved
@@ -21,7 +21,7 @@ produce a concretization of the abstract models. Consequently, model checking re
abstraction produced by the {\dmapper} could be made less coarse, so that more insight into the implementation is gained. Parameters which are now abstracted,
such as the channel identifier or data sent over channels, could be extracted and potentially handled by Register Automata learners. The {\dmapper} also caused
redundancy in the learned models; re-tweaking the abstractions used, in particular those for managing channels, could alleviate this problem while also
improving learning times. This in turn would facilitate learning using expanded alphabets instead of resorting to restricted alphabets. Another aspect which
warrants improvement is the handling of buffering behavior. Buffering leads to many uninteresting states and greatly lengthens the learning process.
Despite these limitations, our work provides a compelling application of learning and model checking in a security setting, on a widely used protocol. We hope this lays
......
@@ -211,6 +211,7 @@ label=""
rauth; //s17
rchan; //s22
rpty; //s27
ptyexit[style=invisible shape=point];
}
}
rauth -> rchan;
@@ -229,6 +230,12 @@ s18 -> rchan[style="dashed"]//[label="REKEY SEQUENCE" style="dashed"] //lhead=c
s24 -> rpty[style="dashed"]//[label="REKEY SEQUENCE" style="dashed"] //lhead=cluster_rekey_states]
s18 -> s2 [label="DISCONNECT, NEWKEYS
/ NO_CONN" ltail=cluster_conn]
ptyexit [style=invisible shape=point]
s24 -> ptyexit
rpty -> ptyexit
ptyexit -> s2 [label="CH_REQUEST_PTY / NO_CONN"]
//rpty -> s2 [label="CH_REQUEST_PTY / DISCONNECT"]
//s4 -> s9 [style=invisible]
//s1 -> s5 [style=invisible]
......
@@ -7,18 +7,18 @@ Formal methods have also been applied. Poll et al. in \cite{Poll_rigorous_2011}
%In \cite{Boss2012}, model based testing was used to verify
%Academic researchers have so far focused more on the theoretical aspects than on implementations of the protocol.
In this work, we use classical active automata learning, or simply model learning, to infer state machines of three SSH implementations, which we then verify by model checking.
Model learning has previously been applied to infer state machines of EMV bank cards~\cite{Aarts2013Formal}, electronic passports~\cite{Aarts2010Inference} and hand-held readers for online banking~\cite{Chalupar2014Automated}. More recently, it was used to learn implementations of TCP~\cite{TCP2016} and TLS~\cite{RuiterProtocol}. Model learning's goal is to obtain a state model of a black-box system by providing inputs and observing outputs. The learned state model corresponds to the observed behavior, and can be used in system analysis. Since model learning builds on a finite number of observations, we can never be sure that the learned model is correct. To that end, advanced conformance testing algorithms are employed~\cite{LeeY96}, which yield some confidence that the inferred model is in fact correct. In the context of testing protocols, model learning can be seen as a form of protocol state fuzzing, whereby unexpected inputs are sent to a system under test in the hope of exposing hidden anomalies. In model learning, inputs are sent with no regard to the order imposed by the protocol. Any anomalies are then exposed in the learned model.
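To make the Mealy-machine view of model learning concrete, a toy model and an output query might look as follows (a minimal sketch with hypothetical input/output names; this is not the LearnLib API used in our setup):

```python
# Minimal Mealy machine: a transition function mapping
# (state, input) -> (next_state, output). Toy illustration only.
class MealyMachine:
    def __init__(self, initial, transitions):
        self.initial = initial
        self.transitions = transitions  # {(state, input): (next_state, output)}

    def query(self, inputs):
        """Run an input word from the initial state; return the output word."""
        state, outputs = self.initial, []
        for symbol in inputs:
            state, out = self.transitions[(state, symbol)]
            outputs.append(out)
        return outputs

# Hypothetical two-state fragment: a channel is only accepted after
# successful authentication.
toy = MealyMachine("s0", {
    ("s0", "UA_PK_OK"): ("s1", "UA_SUCCESS"),
    ("s0", "CH_OPEN"):  ("s0", "NO_RESP"),
    ("s1", "CH_OPEN"):  ("s1", "CH_ACCEPT"),
    ("s1", "UA_PK_OK"): ("s1", "NO_RESP"),
})
print(toy.query(["CH_OPEN", "UA_PK_OK", "CH_OPEN"]))
# -> ['NO_RESP', 'UA_SUCCESS', 'CH_ACCEPT']
```

A learner builds such a model purely from input/output queries like the one above, then validates it with conformance tests.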
Having obtained models, we use model checking to automatically verify their conformance to both functional and security properties. The properties are drawn from the RFC specifications~\cite{rfc4251,rfc4252,rfc4253,rfc4254} and formalized in LTL. They are then checked on the learned models using NuSMV~\cite{NuSMV}. Manually verifying these properties would be difficult, as the learned models are reasonably large. Moreover, formalizing the properties means we can better assess and overcome vagueness or under-specification in the RFC standards.
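As an illustration of the kind of property checked (our own phrasing with hypothetical atomic propositions, not necessarily one of the exact formulas used), "no channel is accepted before authentication succeeds" can be stated in LTL with the weak-until operator $W$:

```latex
% Illustrative LTL security property; chAccept and hasAuth are
% hypothetical atoms over the model's outputs.
\neg \mathit{chAccept} \;\, W \;\, \mathit{hasAuth}
```

Read as: the output \textsc{ch\_accept} may not occur unless an authentication success has occurred first (or never occurs at all).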
%Moreover, the very satisfaction of properties can be used to refine the learned model.
%In cases where the learned model does not satisfy a security property, the proving sequence of inputs can either signal non-conformance of the system, or can be used to further refine the learned model.
Our work is born out of two recent theses \cite{Verleg2016,Toon2016} and is, to our knowledge, the first combined application of model learning and model checking in verifying SSH implementations.
Previous applications of formal verification methods involve model-based testing~\cite{Boss2012} or static analysis~\cite{Poll_rigorous_2011,Udrea_rule-based_2008}. The former limited analysis to the Transport layer. The latter required access to source code, which might not be available for proprietary implementations. \cite{Udrea_rule-based_2008} formulated an extensive set of rules, which were then used by a static analysis tool to verify C implementations. The set of rules covered all layers and described both message ordering and data flow, whereas we only analyze message ordering. However, the rules were tied to routines in the code, so they had to be slightly adapted to fit the different implementations. By contrast, our properties are defined at an abstract level, so they do not need such tailoring. Moreover, employing a black-box approach makes our analysis easily portable and means we do not impose any constraints on the actual implementation of the system (for example, that it is implemented in C).
Our approach most closely resembles work on TCP~\cite{TCP2016}, which also combines classical learning and model checking to analyze various TCP implementations, with our work being more focused on security properties. Other case studies involving model learning rely on manual analysis of the learned models~\cite{Aarts2013Formal,Chalupar2014Automated,RuiterProtocol}. Our work differs in that we use a model checker to automatically check specifications. Protocols have also been inferred by observing network traffic~\cite{Wang2011Inferring}. Such inference is limited to the traces that occur over a network, and is further hampered if the analyzed protocols communicate over encrypted channels, as this severely limits the information available from traces without knowledge of the security key.
%Whereas the combined work of \cite{Poll_rigurous2011}, \cite{Boss2012} leverages model
%based testing on a specification model, we
......
@@ -5,7 +5,7 @@ OpenSSH represents the focal point, as it is the most popular implementation of
and also the default server for many UNIX-based systems. DropBear is an alternative to OpenSSH designed for low-resource
systems. BitVise is a well-known proprietary Windows-only SSH implementation.
In our experimental setup, the {\dlearner} and {\dmapper} were running inside a Linux Virtual Machine. OpenSSH and DropBear were
learned over a localhost connection, whereas BitVise was learned over a virtual connection with the Windows host machine.
Certain arrangements had to be made, including setting timing parameters to fit each implementation.
@@ -17,13 +17,13 @@ Certain arrangements had to be made including the setting of timing parameters t
\label{fig:sshserver}
\end{figure*}
OpenSSH was learned using a full alphabet, whereas DropBear and BitVise were learned using a restricted alphabet. Both versions of
the alphabets are described in Subsection~\ref{subsec:alphabet}. The primary reason for using a restricted alphabet was to reduce learning times.
Most excluded inputs either did not change behavior (like \textsc{debug} or \textsc{unimpl}) or proved costly time-wise,
and were not critical to penetrating all layers. A concrete example is the user/password-based authentication inputs (\textsc{ua\_pw\_ok} and
\textsc{ua\_pw\_nok}). It would take the system 2-3 seconds to respond to an invalid password, perhaps in an attempt by the designers to thwart
brute-force attacks. By contrast, public-key authentication resulted in quick responses. The \textsc{disconnect} input presented similar
challenges, as it would take a varying amount of time until the system responded. This was particularly problematic for BitVise.
%Not only that, but failed password authentication
%attempts are also likely to trigger security mechanisms that would block subsequent authentication attempts. While this is
@@ -65,11 +65,14 @@ DropBear v2014.65 & 17 & 2 & 19863
There are several reasons for the large number of states. First of all, some systems exhibited buffering behavior. In particular, BitVise would queue
responses for higher-layer inputs sent during key re-exchange, and would deliver them all at once, after the exchange was done. Re-exchanging keys (rekeying) was also
a major contributor to the number of states. In states allowing rekey, following the sequence of transitions comprising the rekey should lead back to the starting state. This
leads to two additional rekey states for every state permitting rekey. A considerable number of states were also added due to {\dmapper}-generated outputs such as \textsc{ch\_none} or \textsc{ch\_max}, which signal that no channel is open or that the maximum number of channels has been opened.
Figure~\ref{fig:sshserver} shows the model learned for OpenSSH, with various changes applied to improve readability. The happy flow, contoured in green, is fully explored in the model and mostly matches our earlier description of it\footnote{The only exception is in the transport layer, where unlike in our happy flow definition, the server is the first to send the \textsc{newkeys} message. This is also accepted behavior, as the protocol does not specify which side should send \textsc{newkeys} first.}. Also explored is what happens when a rekey sequence is attempted. We notice that rekey is only allowed in states of the Connection layer. Strangely, for these states, rekey is not state preserving, as the generated output on receiving a \textsc{sr\_auth}, \textsc{sr\_conn} or \textsc{kex30} changes from \textsc{unimpl} to \textsc{no\_resp}. This leads to two sub-clusters of states, one before the first rekey, the other afterward. In all other states, the first step of a rekey (\textsc{kexinit}) yields \textsc{unimpl}, while the last step (\textsc{newkeys}) causes the system to disconnect.
We also found it strange how systems reacted to \textsc{sr\_conn}, the request for services of the Connection layer. First of all, these services could be accessed once the user had authenticated, without the need for an explicit service request. Making this explicit request nevertheless either led to \textsc{unimpl}/\textsc{no\_resp}, as in the case of OpenSSH, or to termination of the connection, as in the case of BitVise. The latter was particularly strange as, in theory, once authenticated, the service should have become accessible to the user. Only DropBear seems to respond positively (\textsc{sr\_accept}) to \textsc{sr\_conn} after authentication.
We also notice the intricate authentication behavior: it seems that password authentication is the only method that allows authentication after an unsuccessful attempt. Finally, of all implementations tested, only BitVise allowed multiple terminals to be requested over the same channel. As depicted in the model, OpenSSH abruptly terminates on requesting a second terminal. DropBear exhibits similar behavior.
......
@@ -5,10 +5,10 @@
%\section{Components}\label{components}
The learning setup consists of three components: the {\dlearner}, the {\dmapper} and the {\dsut}. The {\dlearner} generates abstract inputs representing SSH messages. The {\dmapper} transforms these messages into well-formed SSH packets and sends them to the {\dsut}. The {\dsut} sends response packets back to the {\dmapper}, which in turn translates these packets to abstract outputs and sends them back to the {\dlearner}.
The {\dlearner} uses LearnLib~\cite{LearnLib2009}, a Java library implementing $L^{\ast}$-based algorithms for learning Mealy machines. The {\dmapper} is based on Paramiko, an open-source SSH implementation written in Python\footnote{Paramiko is available at \url{http://www.paramiko.org/}}. We opted for Paramiko because its code is relatively well structured and documented. The {\dsut} can be any existing implementation of an SSH server. The three components communicate over sockets. A graphical representation of our setup is shown in Figure~\ref{fig:components}.
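The socket exchange between {\dlearner} and {\dmapper} can be sketched as a simple line-based request/reply loop (an illustrative sketch with hypothetical message names; the real LearnLib/Paramiko code is considerably more involved):

```python
import socket
import threading

def mapper_loop(conn, translate):
    """Serve abstract inputs line by line, replying with abstract outputs.
    `translate` stands in for the real concretization/abstraction logic."""
    f = conn.makefile("rw")
    for line in f:
        symbol = line.strip()
        if symbol == "QUIT":
            break
        f.write(translate(symbol) + "\n")
        f.flush()

# Demo over a local socket pair; in the real setup, learner and mapper
# run as separate processes connected over TCP.
learner_end, mapper_end = socket.socketpair()
worker = threading.Thread(
    target=mapper_loop,
    args=(mapper_end, lambda s: "KEXINIT" if s == "KEXINIT" else "NO_RESP"))
worker.start()

chan = learner_end.makefile("rw")
chan.write("KEXINIT\nQUIT\n")
chan.flush()
reply = chan.readline().strip()
worker.join()
print(reply)  # KEXINIT
```

The learner only ever sees the abstract symbols exchanged on this channel; all SSH packet handling stays inside the mapper.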
\begin{figure}
\centering
\includegraphics[scale=0.29]{components.pdf}
@@ -17,14 +17,14 @@ The learner uses LearnLib ~\cite{LearnLib2009}, a Java library implementing $L^{
\end{figure}
SSH is a complex client-server protocol. It would be exceedingly difficult to learn all its facets, thus we narrow down the learning goal to learning SSH
server implementations. We further restrict learning to exploring only the terminal service of the Connection layer, as we consider it the most interesting
from a security perspective. Algorithms for encryption, compression and hashing are set to default settings and are not purposefully explored. Also, the starting
state of the {\dsut} is one where a TCP connection has already been established and where SSH versions have been exchanged, a required first step of the Transport layer protocol.
%figure
%It is therefore important to focus on messages for which interesting state-changing behaviour can be expected.
\subsection{The learning alphabet}\label{subsec:alphabet}
We split the learning alphabet into three groups, corresponding to the three layers. %Each input in the alphabet has a corresponding message number in the SSH specification.
Learning does not scale with a growing alphabet, hence it is important to restrict the alphabet to those inputs that trigger interesting behavior. We do this by
only selecting inputs that are consistent with our learning goal.
@@ -41,7 +41,7 @@ in exploring SSH's behavior once the initial exchange took place. From the input
Table~\ref{trans-alphabet} introduces the Transport layer inputs. We include a version of the \textsc{kexinit} message with \texttt{first\_kex\_packet\_follows} disabled.
This means no guess~\cite[p. 17]{rfc4253} is attempted on the {\dsut}'s parameter preferences. Consequently, the {\dsut} will have to send its own \textsc{kexinit} in order to
convey its own parameter preferences before key exchange can proceed. Also included are inputs for establishing new keys (\textsc{kex30}, \textsc{newkeys}), disconnecting (\textsc{disconnect}), as well as the special inputs \textsc{ignore}, \textsc{unimpl} and \textsc{debug}. The latter are not interesting, as they are normally ignored by implementations; hence they are excluded from our restricted alphabet. \textsc{disconnect} proved costly time-wise, so it was also excluded.
%We include two versions of the \textsc{kexinit} message, one where \texttt{first\_kex\_packet\_follows} is disabled, the other when it is enabled, in which case, the message would make a guess on the security parameters~\cite[p. 17]{rfc4253}. Our mapper can only handle correct key guesses, so the wrong-guess procedure as described in ~\cite[p. 19]{rfc4253} was not supported.
%\textsc{ignore}, \textsc{unimpl} and \textsc{debug}
%When needed, SUTs were configured to make this guess work by altering their cipher preferences. The SSH version and comment string (described in Section~\ref{ssh-run-trans}) was not queried because it does not follow the binary packet protocol.
@@ -66,7 +66,7 @@ convey its own parameter preferences before key exchange can proceed. Also inclu
\label{trans-alphabet}
\end{table}
The Authentication layer defines a single client message type in the form of the authentication request~\cite[p. 4]{rfc4252}. Its parameters contain all information needed for authentication. Four authentication methods exist: none, password, public key and host-based. Our {\dmapper} supports all methods except host-based authentication, because various SUTs lack support for this feature. Both the public-key and password methods have \textsc{ok} and \textsc{nok} variants, which provide correct and incorrect credentials respectively. Our restricted alphabet supports only public-key authentication, as implementations processed it the fastest of all authentication forms.
\begin{table}[!ht]
\centering
@@ -110,14 +110,15 @@ The restricted alphabet only supports the most general channel management inputs
%table
\subsection{The mapper}
The {\dmapper} must translate between abstract message representations and well-formed SSH messages. A special case is when no output is received from the {\dsut}, in which case the {\dmapper} returns a \textsc{no\_resp} message to the {\dlearner}, concluding that a timeout occurred. The sheer complexity of the {\dmapper} meant that it was easier to adapt an existing SSH implementation than to construct the {\dmapper} from scratch. Paramiko already provides mechanisms for encryption/decryption, as well as routines for sending and receiving the different types of packets. These routines are called by control logic dictated by Paramiko's own state machine. The {\dmapper} was constructed by replacing this control logic with one dictated by messages received from the {\dlearner}. %over a socket connection
The {\dmapper} maintains a set of state variables, which have initial values, are updated on specific outputs and are used to concretize certain inputs. On receiving a \textsc{kexinit}, the {\dmapper} saves the {\dsut}'s parameter preferences. These preferences define the key exchange, hashing and encryption algorithms supported by the {\dsut}. Before such receipt, these parameters default to those that any implementation should support, as required by the RFC. Based on these parameters, a key exchange algorithm may be run. The {\dmapper} supports Diffie-Hellman, which it can initiate via a \textsc{kex30} input from the {\dlearner}. The {\dsut} responds with \textsc{kex31} if the inputs were sent in order. From \textsc{kex31}, the {\dmapper} saves the hash, as well as the new keys. Receipt of the \textsc{newkeys} response from the {\dsut} will make the {\dmapper} use the newly negotiated keys in place of the older ones, if such existed. The {\dmapper} contains two other state variables:
one is a buffer storing open channels; it is initially empty and is grown/shrunk on \textsc{ch\_open} and \textsc{ch\_close} inputs respectively. The other is initially set to 0, and stores the sequence number of the last received message, which is then used when constructing \textsc{unimpl} inputs.
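The state variables just described can be sketched as follows (a simplified illustration with hypothetical names; these are not Paramiko's actual data structures):

```python
# Sketch of the mapper's state variables: algorithm preferences,
# session keys, open channels, and the last received sequence number.
class MapperState:
    def __init__(self):
        # defaults any RFC-compliant server should support,
        # overwritten once the SUT's KEXINIT is received
        self.preferences = {"kex": "diffie-hellman-group14-sha1",
                            "cipher": "aes128-ctr", "mac": "hmac-sha1"}
        self.keys = None          # switched in on NEWKEYS
        self.pending_keys = None  # negotiated via KEX30/KEX31
        self.channels = []        # grown/shrunk on CH_OPEN/CH_CLOSE
        self.last_seq = 0         # used when constructing UNIMPL inputs

    def on_output(self, msg, payload=None):
        self.last_seq += 1  # stand-in for the real packet sequence number
        if msg == "KEXINIT":
            self.preferences.update(payload or {})
        elif msg == "KEX31":
            self.pending_keys = payload  # exchange hash and derived keys
        elif msg == "NEWKEYS" and self.pending_keys is not None:
            self.keys, self.pending_keys = self.pending_keys, None

state = MapperState()
state.on_output("KEX31", {"hash": "h", "keys": "k"})
state.on_output("NEWKEYS")
print(state.keys)  # {'hash': 'h', 'keys': 'k'}
```

Until \textsc{newkeys} is observed, the previously active keys (or none at all) remain in use, mirroring the behavior described above.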
In certain scenarios, inputs are answered by the {\dmapper} directly instead of being sent to the {\dsut}. These scenarios include the following:
\begin{enumerate}
\item if connection with the {\dsut} was terminated, the {\dmapper} responds with a \textsc{no\_conn} message
\item if no channel has been opened (the buffer variable is empty) or the maximum number of channels (in our experiments 1) has been reached, the {\dmapper}
responds with \textsc{ch\_none} or \textsc{ch\_max} respectively
\end{enumerate}
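The scenarios above amount to a small short-circuit check before each input is concretized; a minimal sketch (illustrative names, though the experiments did cap the number of channels at 1):

```python
# Decide whether the mapper can answer an abstract input itself.
MAX_CHANNELS = 1  # value used in our experiments

def direct_answer(symbol, connected, channels):
    """Return an abstract output if no SUT interaction is needed, else None."""
    if not connected:
        return "NO_CONN"   # connection with the SUT was terminated
    if symbol == "CH_OPEN" and len(channels) >= MAX_CHANNELS:
        return "CH_MAX"    # maximum number of channels reached
    if symbol in ("CH_CLOSE", "CH_REQUEST_PTY") and not channels:
        return "CH_NONE"   # no channel is currently open
    return None            # forward the input to the SUT

print(direct_answer("CH_OPEN", True, ["ch0"]))  # CH_MAX
print(direct_answer("CH_CLOSE", True, []))      # CH_NONE
print(direct_answer("KEXINIT", False, []))      # NO_CONN
```

Only when this check returns no answer is the input actually concretized and sent to the {\dsut}.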
@@ -125,7 +126,6 @@ Overall, we notice that in many ways, the {\dmapper} acts similarly to an SSH client
implementation.
\subsection{Compacting SSH into a small Mealy machine}
The {\dmapper} not only provides abstraction; it also ensures that the abstract representation shown to the {\dlearner} behaves like a deterministic Mealy machine.
This is needed, as SSH implementations are inherently \textit{non-deterministic}. Sources of non-determinism can be divided into three categories:
timeout values accordingly, so that enough time was allowed for the response to arrive.
%For example, a trace in which a valid user authentication is performed within five milliseconds after an authentication request on DropBear can cause the authentication to (wrongly) fail.
\end{enumerate}
To detect non-determinism, we cache all new observations in an SQLite database and verify observations against this cache. The cache also enables us to answer queries answered before
without running inputs on the {\dsut}. This extends across learning runs: by reloading the cache from a previous run, a subsequent identical run can quickly answer all queries up to the point where the previous run ended.
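A minimal version of such a cache, using Python's \texttt{sqlite3} module, might look as follows; the schema and names are illustrative, not those of the actual tool.

```python
import sqlite3

class ObservationCache:
    """Cache of query observations; flags non-determinism on mismatch."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS obs (inputs TEXT PRIMARY KEY, outputs TEXT)")

    def check_and_store(self, inputs, outputs):
        """Store a new observation; raise if it contradicts the cache."""
        key, val = " ".join(inputs), " ".join(outputs)
        row = self.db.execute(
            "SELECT outputs FROM obs WHERE inputs = ?", (key,)).fetchone()
        if row is None:
            self.db.execute("INSERT INTO obs VALUES (?, ?)", (key, val))
            self.db.commit()
        elif row[0] != val:
            raise RuntimeError(
                f"non-determinism on {key!r}: {row[0]!r} vs {val!r}")

    def lookup(self, inputs):
        """Answer a previously seen query without running the SUT."""
        row = self.db.execute(
            "SELECT outputs FROM obs WHERE inputs = ?",
            (" ".join(inputs),)).fetchone()
        return row[0].split() if row else None
```

A persistent database file (instead of \texttt{:memory:}) is what lets a later run resume from a previous one's observations.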
Aside from non-determinism, SSH implementations can also produce a sequence of outputs in response to a single input, whereas Mealy machines allow for only one output. To deal with this, the {\dmapper}
concatenates all outputs into one, delivering a single output to the {\dlearner}.
Another challenge is presented by buffering, which leads to an explosion of states, as buffers cannot be described succinctly by Mealy machines.
We have encountered buffers on two occasions. Firstly, some implementations buffer certain responses during key re-exchange. As soon as rekeying completes, these queued messages are released all at once. This leads to a \textsc{newkeys} response (indicating rekeying has completed), directly followed by all buffered responses. This, combined with the concatenation of multiple output responses, would lead to non-termination of the learning algorithm, as every variant of the response queue would produce a different output. To counter this, we replace the concatenation of queued output responses by the single string \textsc{buffered}, thus forming \textsc{newkeys\_buffered}.
Buffer behaviour can also be observed when opening and closing channels, since a SUT can close only as many channels as have previously been opened. For this reason, we restrict the number of simultaneously open channels to one. The {\dmapper} returns a custom response \textsc{ch\_max} to a \textsc{ch\_open} message whenever this limit is reached.
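Both adaptations, concatenating response sequences into one output and collapsing post-rekey queues into a \textsc{buffered} token, can be sketched in a single abstraction function. The token names follow the text; the dispatch logic is our simplification.

```python
def abstract_output(responses):
    """Collapse a SUT response sequence into one learner output.

    Queued responses released right after rekeying are replaced by the
    single token BUFFERED, so the output alphabet stays finite."""
    if not responses:
        return "NO_RESP"
    if responses[0] == "NEWKEYS" and len(responses) > 1:
        # rekeying completed and a queue of buffered messages followed
        return "NEWKEYS_BUFFERED"
    # otherwise, concatenate the outputs into a single symbol
    return "_".join(responses)
```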
tabsize=2
\begin{abstract}
We apply protocol state fuzzing on three SSH implementations to infer state machines, which
are then verified by a model checker for functional and security properties. Our results show that
all tested SSH servers adhere to the security properties, while satisfying the functional ones
only to varying degrees. Moreover, the corresponding state machines differ significantly.
These differences allow anyone to effectively fingerprint the analyzed implementations.
%Various shortcomings with regards to the RFCs were found. Opening multiple channels is not properly implemented on CiscoSSH and PowerShell. OpenSSH contains a bug which can result in connection closure after rekeying in some circumstances. Both Tectia and OpenSSH implement a liberal message acceptance policy in the first phase of the protocol. Such a liberal policy is unwise in this error-prone stage.
$I$, $O$ and $Q$ are finite sets of \emph{inputs}, \emph{outputs} and \emph{states}, and ${\rightarrow} \subseteq Q \times I \times O \times Q$
is the \emph{transition relation}. We say $q \xrightarrow{i/o} q'$ if $(q, i, o, q') \in \rightarrow$.
A Mealy machine $\mathcal{M}$ is \emph{input enabled} if for each state $q$ and input $i$, there is a transition $q \xrightarrow{i/o} q'$, for some $o$ and $q'$. Additionally,
a Mealy machine is said to be \emph{deterministic} if there is at most one such transition defined. Our setting restricts the representations of systems to Mealy machines that are both
input enabled and deterministic.
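As an illustration, a deterministic, input-enabled Mealy machine can be encoded as a transition table; the sketch below computes the output word $A_{\M}(\sigma)$ for an input word $\sigma$.

```python
class Mealy:
    """Deterministic, input-enabled Mealy machine as a transition table."""

    def __init__(self, q0, delta):
        self.q0 = q0
        self.delta = delta  # (state, input) -> (output, next state)

    def run(self, sigma):
        """Return the output sequence for the input word sigma."""
        q, outs = self.q0, []
        for i in sigma:
            o, q = self.delta[(q, i)]
            outs.append(o)
        return outs
```

Input-enabledness corresponds to the table being total: every (state, input) pair must have an entry.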
%\begin{figure}[h]
\subsection{MAT Framework} \label{ssec:mat}
The most efficient algorithms for model learning all follow
the pattern of a \emph{minimally adequate teacher (MAT)} as proposed by Angluin~\cite{Angluin1987Learning}.
In the MAT framework, learning is viewed as a game in which a \emph{learner} has to infer an unknown automaton by asking queries to a teacher. The teacher knows the automaton, which in our setting is a deterministic Mealy machine $\M$.
Initially, the {\dlearner} only knows the inputs $I$ and outputs $O$ of $\M$.
The task of the {\dlearner} is to learn $\M$ through two types of queries:
\begin{itemize}
\item
With a \emph{membership query}, the {\dlearner} asks what the response is to an input sequence $\sigma \in I^{\ast}$.
The teacher answers with the output sequence in $A_{\M}(\sigma)$.
\item
With an \emph{equivalence query}, the {\dlearner} asks whether a hypothesized Mealy machine $\CH$ is correct, that is,
whether $\CH \approx \M$. The teacher answers \emph{yes} if this is the case. Otherwise it answers
\emph{no} and supplies a \emph{counterexample}, which is a sequence $\sigma \in I^{\ast}$ that triggers
a different output sequence for both Mealy machines, that is, $A_{\CH}(\sigma) \neq A_{\M}(\sigma)$.
Model learning algorithms have been developed for learning deterministic Mealy machines with
a finite number of queries. We point to \cite{Isberner2015} for a recent overview. These algorithms are leveraged
in applications where one wants to learn a model of a black-box reactive system, or System Under Learning ({\dsut}). The teacher typically
consists of the {\dsut}, which answers membership queries, and a conformance
testing tool \cite{LeeY96} that approximates equivalence queries using a set
of \emph{test queries}. A test query consists of asking the {\dsut} for the response to an input sequence
$\sigma \in I^{\ast}$, similar to a membership query.
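The two query types can be sketched as follows, with equivalence approximated by random test queries; this is a simplification of the conformance-testing tools used in practice, and the machine encoding is ours.

```python
import random

def run(machine, sigma):
    """Output word of a Mealy machine given as (q0, {(q, i): (o, q')})."""
    q0, delta = machine
    q, outs = q0, []
    for i in sigma:
        o, q = delta[(q, i)]
        outs.append(o)
    return outs

def membership_query(sut, sigma):
    # the teacher answers by running the word on the (black-box) machine
    return run(sut, sigma)

def equivalence_query(sut, hyp, inputs, tests=1000, max_len=6, seed=0):
    """Approximate an equivalence query by random test queries; return
    a counterexample word, or None if none was found (sketch)."""
    rng = random.Random(seed)
    for _ in range(tests):
        sigma = [rng.choice(inputs) for _ in range(rng.randint(1, max_len))]
        if run(sut, sigma) != run(hyp, sigma):
            return sigma
    return None
```

Since equivalence is only approximated, a returned \texttt{None} does not prove the hypothesis correct, which is the usual caveat of this setup.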
\subsection{Abstraction} \label{ssec:mappers}
Most current learning algorithms are only applicable to Mealy machines with small alphabets comprising abstract messages. Practical systems typically
have parameterized input/output alphabets, whose application triggers updates on the system's state variables. To learn
these systems we place a \emph{mapper} between the {\dlearner} and the {\dsut}. The {\dmapper} is a transducer which translates
abstract inputs to concrete inputs and concrete outputs to abstract outputs. For a thorough definition of mappers, we refer to \cite{AJUV15}.
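The transducer idea can be illustrated with the following sketch; the message fields and the stored session value are invented for illustration and do not reflect the concrete SSH wire formats.

```python
class Mapper:
    """Transducer sketch: concretizes abstract inputs and abstracts
    concrete outputs, using state it accumulates along the way."""

    def __init__(self):
        self.session_id = None  # example of mapper-held state

    def concretize(self, abs_input):
        """Abstract learner input -> concrete SUT message (illustrative)."""
        if abs_input == "UA_PW_OK":
            # fill in concrete parameters kept in the mapper's state
            return {"msg": "USERAUTH_REQUEST", "user": "learner",
                    "password": "valid-password",  # assumed credential
                    "session": self.session_id}
        return {"msg": abs_input}

    def abstract(self, concrete_output):
        """Concrete SUT message -> abstract learner output."""
        if concrete_output["msg"] == "KEXINIT":
            # remember a value needed to concretize later inputs
            self.session_id = concrete_output.get("cookie")
        return concrete_output["msg"]
```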
%Perhaps some explanation
The size of the models makes them difficult to inspect manually and verify against specifications. Manual analysis is further complicated by the ambiguity present in textual specifications.
Hence it makes sense to (1) formalize specifications so as to eliminate ambiguity, and (2) use model checking to verify them automatically. To these ends, we use the NuSMV model checker to verify security properties on the learned models, properties which we formalize as LTL formulas.
NuSMV is a model checker where a model is specified as a set of finite variables and a transition function which updates these variables. Specifications in temporal logic, such as CTL and LTL, can be checked against such models; NuSMV provides a counterexample if a specification does not hold. NuSMV models are generated automatically from the learned models. Generation proceeds by first defining, in an empty NuSMV file, three variables, corresponding to inputs, outputs and states. The transition function is then extracted from the learned model and appended to the file.
This function updates the output and state variables for a given valuation of the input variable and the current state. Figure~\ref{fig:nusmvex} gives an example of a Mealy machine and its associated NuSMV model.
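The generation step can be sketched as follows. The emitted NuSMV syntax follows the scheme described above, though the actual generator may differ in details.

```python
def to_nusmv(inputs, outputs, delta, q0):
    """Emit a NuSMV module for a Mealy machine given as a table
    (state, input) -> (output, next_state).  Illustrative sketch."""
    states = sorted({q for q, _ in delta} | {q2 for _, q2 in delta.values()})
    lines = ["MODULE main",
             "VAR",
             "  inp : {%s};" % ", ".join(inputs),
             "  out : {%s};" % ", ".join(outputs),
             "  state : {%s};" % ", ".join(states),
             "ASSIGN",
             "  init(state) := %s;" % q0,
             "  next(state) := case"]
    # state update for each valuation of (state, inp)
    for (q, i), (_, q2) in sorted(delta.items()):
        lines.append("    state = %s & inp = %s : %s;" % (q, i, q2))
    lines += ["    TRUE : state;", "  esac;",
              "  out := case"]
    # output for each valuation of (state, inp)
    for (q, i), (o, _) in sorted(delta.items()):
        lines.append("    state = %s & inp = %s : %s;" % (q, i, o))
    lines += ["    TRUE : %s;" % outputs[0], "  esac;"]
    return "\n".join(lines)
```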
%\parbox{\columnwidth}{
% \parbox{0.5\columnwidth}{
A key note is that properties are checked not on the actual concrete model of the {\dsut}, but on the abstract model produced by learning.
\subsection{Basic characterizing properties}
%cannot be translated. %Though in practical terms, these results are still translatable, in particular for cases where properties are not met.
In our setting, once the connection is lost, it can no longer be recovered and the {\dmapper} will itself respond with \textsc{no\_conn} to any subsequent non-Connection layer inputs sent by the {\dlearner}. This behavior is described by Property~\ref{prop:noconn}, where \emph{isConnectionInput} is a predicate which only holds if the input supplied is a Connection layer input. The reason we exclude Connection layer inputs is due to a mapper characteristic we touched on in Section~\ref{sec:result}. The {\dmapper} maintains a buffer of opened channels and limits its size to 1. From the perspective of the {\dmapper}, a channel is open, and thus added to the buffer, whenever \textsc{ch\_open} is received from the learner, regardless of whether a channel was actually opened on the {\dsut}. If an attempt to open an additional channel is made, the {\dmapper} itself responds with \textsc{ch\_max} without querying the {\dsut}. Conversely, if there is no channel open (the buffer is empty) and an input operating on a channel is sent, the {\dmapper} responds with \textsc{ch\_none}, again without querying the {\dsut}. Additionally, a channel opened on the {\dmapper} is closed and removed from the buffer on a \textsc{ch\_close} from the {\dlearner}, with a corresponding SSH CHANNEL CLOSE message being sent to the {\dsut}. We use Property~\ref{prop:channel} to describe this mapper-induced behavior.
\begin{property}%[h]
G (inp=CH_OPEN & out!=CH_MAX -> (inp=CH_OPEN -> out=CH_MAX)
\subsection{Layer-critical security properties}
In SSH, upper layer services assume notions of security which are ensured by mechanisms in the lower layers. These mechanisms must first be engaged for the respective upper layer services to become available. As an example, authentication and connection services should only become available after cryptographic keys have been exchanged and put into use (key exchange) in the Transport layer; otherwise these services would be running over an unencrypted channel. Requests for these services should therefore not succeed unless key exchange was performed successfully.
Key exchange implies three steps which have to be performed in order but may be interleaved with other actions. Successful authentication or connection service requests should necessarily imply successful execution of the key exchange steps. We can tell whether a service request or a key exchange step was successful from the values of the input and output variables. For example, an authentication service request is successful if for the input \textsc{sr\_auth}, the output \textsc{sr\_accept} was received. One would expect that a connection service request would similarly be successful if on sending \textsc{sr\_conn}, we also received the output \textsc{sr\_accept}. Unfortunately, as discussed in the previous section, behavior on \textsc{sr\_conn} seems inconsistent, so in its place we use the opening of a channel as a means of knowing that Connection layer services were available. This is suggested by receiving a \textsc{ch\_open\_success} output to a \textsc{ch\_open} input.
Following these principles, we define the LTL specification in Property~\ref{prop:sec-trans}, where O stands for the Once-operator. This operator is a past time LTL operator which holds if its argument holds in a past time instant.
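On a finite trace, checking that a success output is preceded by the ordered key exchange steps, each under the Once operator, can be sketched as follows; the predicates and trace encoding are our simplification of the LTL property.

```python
def check_once(trace, step_preds, success_pred):
    """Check on a finite trace of (input, output) events that every
    success event is preceded by all key exchange steps, each having
    held at least once (the O operator), in the required order."""
    done = [False] * len(step_preds)
    for ev in trace:
        for k, p in enumerate(step_preds):
            if p(ev) and all(done[:k]):   # step k counts only after 1..k-1
                done[k] = True
        if success_pred(ev) and not all(done):
            return False                  # success without prior key exchange
    return True
```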
G ( (out=KEXINIT) ->
\label{prop:trans-kexinit}
\end{property}
On the same page, it is stated that if the server rejects the service request, ``it SHOULD send an appropriate SSH\_MSG\_DISCONNECT message and MUST disconnect''. Moreover, in case it supports the service request, it MUST send a \textsc{sr\_accept} message. Unfortunately, it is not evident from the specification whether rejection and support are the only allowed outcomes. We assume this is the case, and formalize an LTL formula accordingly as Property~\ref{prop:trans-sr}. For any service request, in case the connection wasn't lost (\textsc{no\_conn}) and we are not in the initial state, the response will either be an accept (\textsc{sr\_accept}) or a disconnect (\textsc{disconnect}) which loses the connection in the next step. We adjusted the property for the initial state since all implementations responded with \textsc{kexinit}, which would easily break the property. We cannot yet explain this behavior.
% , with the adjustment that we also allow the mapper induced output \textsc{no\_conn}, which suggests that connection was lost. Additionally, we exclude the initial state from the implication, as it is the only state where a \textsc{kexinit} is generated as a response, which seems to be the default behavior for all implementations.
\begin{property}[h]
SSHv2 follows a client-server paradigm consisting of three components (Figure~\r
\item The \textit{connection protocol} (RFC 4254~\cite{rfc4254}) allows the encrypted channel to be multiplexed in different channels. These channels enable a user to run multiple processes, such as terminal emulation or file transfer, over a single SSH connection.
\end{itemize}
Each layer has its own specific messages. The SSH protocol is interesting in that outer layers do not encapsulate inner layers. This means that different layers can interact. Hence, it makes sense to analyze SSH as a whole, instead of analyzing its constituent layers independently. We review each layer, outlining the relevant messages which are later used in learning. We refer to the common case of a successful operation as the \textit{happy flow}.
The SSH protocol happy flow at a high level consists of the following steps: (1) the client first establishes a TCP connection with the server; (2) the two sides negotiate encryption algorithms and subsequently establish one-time session keys via a negotiated key exchange method, with further communication secured using these keys; (3) the client authenticates to the server by providing valid credentials; and finally, (4) the client accesses services on the server, such as the terminal service. Leaving out the first step, each of the steps that follow is an integral part of one of the SSH layers.
%perhaps room for figure
SSH runs over TCP, and provides end-to-end encryption based on pseudo-random key
\label{fig:hf-trans}
\end{figure}
\textsl{Key re-exchange}~\cite[p. 23]{rfc4253}, or \textsl{rekey}, is a near identical process, with the difference being that instead of taking place at the beginning, it takes place once session keys are already in place. The purpose is to renew session keys so as to foil potential replay attacks~\cite[p. 17]{rfc4251}. It follows the same steps as key exchange. Messages exchanged as part of it are encrypted using the old set of keys, messages exchanged afterward are encrypted using the new keys. A key property of rekey is that it should preserve the state. That is, the state of the protocol before a rekey should be the same as afterward, with the difference that new keys are now in use. %Some implementations are known not support rekey in certain states of the protocol.
%We consider an transport layer state machine secure if there is no path from the initial state to the point where the authentication service is invoked without exchanging and employing cryptographic keys.
\subsection{Authentication layer}\label{ssh-run-auth}
Once the authentication service has been summoned, authentication can commence. To this end, RFC 4252~\cite{rfc4252} defines four authentication methods (password, public-key, host-based and none). The authentication request includes a user name, service name and authentication data, which consists of both the authentication method and the data needed to perform the actual authentication, such as the password or public key. The happy flow for this layer is defined as the sequence that results in a successful authentication. The messages \textsc{ua\_pw\_ok} and \textsc{ua\_pk\_ok} achieve this for password and public key authentication, respectively. Figure~\ref{fig:hf-auth} presents the case involving password authentication.
%We consider a user authentication layer state machine secure if there is no path from the unauthenticated state to the authenticated state without providing correct credentials.
\begin{figure}[!ht]
\subsection{Connection layer}\label{ssh-run-conn}
Successful authentication makes services of the Connection layer available. The Connection layer enables the user to open and close channels of different types, with each type providing access to specific services. From the various services available, we focus on the remote terminal over a session channel, a standout feature of SSH. The happy flow involves opening a session channel, \textsc{ch\_open}, requesting a ``pseudo terminal'' \textsc{ch\_request\_pty}, sending and managing data via the messages \textsc{ch\_send\_data}, \textsc{ch\_window\_adjust}, \textsc{ch\_send\_eof}, and eventually closing the channel via \textsc{ch\_close}. Figure~\ref{fig:hf-conn} depicts the common scenario.
%Because the connection protocol offers a wide range of functionalities, we it is hard to define a single happy flow. Requesting a terminal is one of the main features of SSH and has therefore been selected as the happy flow. This behaviour is typically triggered by the trace \textsc{ch\_open}; \textsc{ch\_request\_pty}. Other