Multidimensional key generation of ICMetrics for cloud computing
 Bin Ye^{1},
 Gareth Howells^{1},
 Mustafa Haciosman^{1} and
 Frank Wang^{2}
DOI: 10.1186/s13677-015-0044-6
© Ye et al. 2015
Received: 11 April 2015
Accepted: 2 October 2015
Published: 15 October 2015
Abstract
Despite the rapid expansion and uptake of cloud-based services, lack of trust in the provenance of such services represents a significant inhibiting factor in their further expansion. This paper explores an approach to assuring trust and provenance in cloud-based services via the generation of digital signatures using properties or features derived from the services' own construction and software behaviour. The resulting system removes the need for a server to store a private key in a typical Public/Private Key Infrastructure for data sources. Rather, keys are generated at runtime from features obtained as service execution proceeds. In this paper we investigate several potential software features for suitability in a cloud service identification system. Generating a stable and unique digital identity from features in Cloud computing is challenging because the operating environments are unstable, which implies that the features employed are likely to vary under normal operating conditions. To address this, we introduce a multidimensional key generation technology which maps from a multidimensional feature space directly to a key space. Subsequently, a smooth entropy algorithm is developed to evaluate the entropy of the key space.
Keywords
Cryptography; Cloud computing; Security; Key generation; ICMetrics; Multidimensional space
Introduction
With the popularity of Cloud computing, the majority of medium- to small-sized companies have started to deploy their servers with third-party Cloud computing service providers in order to reduce costs. However, due to the low technical barrier to setting up Cloud servers, malware or phishing sites can manipulate servers by exploiting the resulting vulnerabilities. For this reason, quickly identifying such web servers and protecting customers' private data is becoming an important research area. This paper introduces a technique termed ICMetrics that can be used to protect services located on a Cloud server: a digital signature is generated using properties or features derived from the server's own construction and behaviour [9]. ICMetrics is capable of assuring both authenticity and freedom from malware, while simultaneously allowing the flexibility for systems to be operated within their designed specification and executed on an arbitrary platform. There are several major advantages to using ICMetrics. Firstly, there is no need to store encryption keys or device characteristic templates, because a digital signature is regenerated only when required. There is therefore no physical record of the key, which means that it is not possible to compromise the security of sensitive data through unauthorised access to the keys. Furthermore, if a system is compromised, no sensitive template data or keys are disclosed; as a result, the compromise does not permit unauthorised access to other systems protected by the same ICMetrics, or indeed to any system protected by other ICMetrics templates. In addition, a faulty or maliciously tampered device is autonomously prevented from decrypting its own data or participating in any initiated secure communications.
This is because tampering with the constitution of the software causes its behaviour to change, which in turn causes the generated ICMetrics features to change. As a result, any regenerated keys will differ from the original ones that were generated before the software's integrity was compromised.
Currently, there are two research directions in ICMetrics. The first asks what kinds of software behaviour can be used to identify servers in a Cloud environment. Our previous work investigated this problem in [15], [18] and [1], where we used memory usage and the program counter as features to identify devices. Such hardware features are unique and stable enough to be used for the ICMetrics technology. However, Cloud computing environments are completely different from traditional server deployments. In a Cloud computing cluster, a number of computers are managed by software which dynamically distributes the workload across all of them. As a result, it is difficult to determine where in the cluster a program will be executed, so only software features can be used in such scenarios. In this paper, we investigate the fundamental principles of feature selection within a Cloud computing environment and analyse the methodology of adopting features for a stable key generation process.
The goal of this paper is to evaluate the feasibility of generating encryption keys based on features derived from the properties and behaviour of Cloud servers. To achieve this, we first investigate appropriate methodologies for extracting features from Cloud servers and explore potential features that are suitable for key generation. Then, having evaluated several existing encryption key generation algorithms, we devise and evaluate a new multidimensional encryption key generation algorithm. Finally, we develop a smooth entropy calculation algorithm which is used to calculate the actual uncertainty of our encryption key.
Related work
This section gives an overview of previous work on software security and biometric encryption key generation. As previously discussed, security on Cloud servers is increasingly of concern to users. There are two guarantees that users need when they connect to Cloud servers: they should be able to verify that the cloud server is not a fraud or spoof, and that the server is not compromised. To satisfy these two criteria, [8] proposed a malware detection system for AV software that matches automatically generated behavioural models against the runtime behaviour of unknown programs. Rahmatian et al. [12] used a control-flow graph (CFG) to detect intrusions into secured embedded systems by detecting behavioural differences between the target system and malware. In their system, each executing process is associated with a finite state machine (FSM) that recognizes the sequences of system calls generated by the correct program; attacks are detected if the system call sequence deviates from the known sequence. Wang et al. [11] proposed a software theft detection system based on system call dependency graph birthmarks. Software birthmarks are defined as unique characteristics that a program possesses and that can be used to identify the program without the need for source code; a dynamic analysis tool is used in [19] to generate system call traces and system call dependency graphs (SCDGs) to detect software component theft.
For the proposed system, the intended solution should not store a private key in a typical Public Key Infrastructure for data authentication. Instead, it should generate a private key at runtime using features extracted dynamically from servers. The keys are generated from these features (software behaviour data) using feature maps derived from them. The system is intended always to generate the same public/private key pair for a service's operation, regardless of input parameters. Previous work on biometric-based and ICMetrics-based systems has already investigated similar template-free key generation technologies. For instance, [4] proposed a multidimensional key generation strategy based on handwritten features. In that work, the biometric features of the authentic users are first collected as training data, and a user-dependent feature transform is derived such that the transformed features are distinguishable from those of impostors. Secondly, a stable key generation mechanism is utilized to generate a stable cryptographic key, produced by cascading the bit pattern of each feature. [13] described a different key composition strategy where each biometric feature conceptually contributes one bit of the cryptographic key. Similarly, Jermyn [7] proposed hand-drawn sketches to generate passwords according to the position and timing of the sketch drawing. However, these technologies all rely on feature transformations which transform features from their original state into a form that can be distinguished from other data, and subsequently generate cryptographic keys from the transformed features. In a Cloud computing environment, features overlap easily and are highly changeable, so in order to make the ICMetrics system feasible in the Cloud we need to incorporate as many features as possible.
For this reason, we propose a multidimensional key generation scheme that generates a key from a multidimensional space directly. All features originating from the same source are assembled into a single multidimensional space, mapped to a key vector and then formed into a digital signature. Finally, a Shamir secret sharing scheme [3] and Reed-Solomon codes [10] are used to improve the robustness and the performance of the system.
Criteria
The criteria that we adopted for encryption key generation in Cloud environments incorporate both security and usability requirements. The usability requirement is that key generation for servers in a Cloud environment is performed successfully and produces server-dependent derived keys. In our scenario, the keys are generated from features gathered from the server and the operating system. For this reason, any change to either the server itself or the operating system should affect the derived keys. For instance, if anybody tries to access the system, or the system is infected by a virus, the system's behavioural data will certainly change; all such operations alter the server or system behaviour during key generation. In some cases the server may be upgraded to a new version, which may require the system to perform a key updating process to retrain the key generation algorithm; however, normal variation in the server and system environment should not affect the derived keys. In other words, the derived key should exhibit low intra-sample variance (i.e. the values produced for the same device) but high inter-sample variance (i.e. the values produced for different devices), with the ideal case being no inter-sample overlap of potential features.
As mentioned previously, our security goal is to resist cryptanalysis carried out by an attacker who wants to access the server system. Any attacker who accesses the system will inevitably interfere with the key generation process: to do so, they need to alter the original program or run malicious code, which will certainly change the original thread sequence, consume memory or alter the CPU usage. As a result, the feature extraction process during key generation will be affected. In addition, against attackers who capture the server and wish to perform a brute force attack, we have developed an entropy extraction strategy that measures the actual uncertainty of the derived key and ensures it is greater than 2^{80}.
Feature extraction
In order to collect data, we set up 9 different Cloud servers and simulated customer usage remotely.
Examples of Cloud server behaviour algorithms
Bubble sort
Merge sort
Cocktail sort
BMH search
Rabin-Karp search
Gaussian classifier
Neural network classifier
Sieve of Atkin (an algorithm for finding all prime numbers)
Sieve of Eratosthenes (a different algorithm for finding all prime numbers in a specific range)
Under normal circumstances of infrastructure as a service (IaaS), our Cloud servers are deployed on virtual machines running a Linux-based system. Two approaches have been identified that allow features to be extracted at runtime from the Cloud servers; in this paper they are termed the “black box” and the “white box”. A Cloud computing cluster is set up with Eucalyptus [14] and Xen [2]. Nine virtual machines are created, each running Ubuntu Server 14.04.2 LTS, and a traditional LAMP (Linux, Apache, MySQL, PHP) web server is set up in each virtual operating system. Features are collected by a statistics module every 2 s from both the black box and the white box; that is, both boxes monitor the events which have been registered by our system.
Monitored Linux kernel functions
sys_munmap  hrtimer_expire_entry  sys_lseek  sys_poll 
sys_recvmsg  sys_fcntl  kmem_cache_alloc_node  sched_stat_runtime 
sys_wait4  sched_stat_iowait  sched_wakeup  power_end 
timer_expire_entry  block_unplug  sys_fadvise64  sys_close 
mm_page_pcpu_drain  sched_migrate_task  sys_rt_sigprocmask  softirq_exit 
sys_futex  sys_mmap  sys_read  sys_select 
power_start  sys_newfstat  kmem_cache_free  sys_newstat 
sys_writev  sys_splice  sys_ioctl  sched_stat_sleep 
irq_handler_exit  sys_open  irq_handler_entry  hrtimer_init 
sys_write  exit_syscall  mm_page_alloc_zone_locked  softirq_entry 
workqueue_execute_start  rcu_utilization  sys_setitimer  sys_eventfd2 
mm_page_alloc  softirq_raise  timer_expire_exit  sched_stat_wait 
sys_recvfrom  sched_switch  workqueue_activate_work  block_rq_issue 
hrtimer_cancel  sys_sync_file_range  hrtimer_start  scsi_dispatch_cmd_start 
hrtimer_expire_exit  block_bio_remap  sys_sendmsg  kmem_cache_alloc 
mm_page_free_batched  workqueue_queue_work 
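The black-box sampling loop described above can be sketched in Python as follows. This is a minimal sketch under stated assumptions: `read_event_counts()` is a hypothetical stand-in for a kernel tracer such as LTTng [5] returning cumulative counts of the monitored events; the stub here returns constants so the sketch is runnable anywhere.

```python
import time
from collections import Counter

def read_event_counts():
    # Hypothetical stand-in for a kernel tracer; a real deployment would
    # read cumulative counts for the monitored functions listed above.
    return Counter({"sys_read": 120, "sys_write": 95, "sched_switch": 310})

def sample_features(n_samples, interval=0.0):
    """Poll cumulative event counts periodically and return per-event deltas
    (the paper's statistics module samples every 2 s)."""
    samples = []
    prev = read_event_counts()
    for _ in range(n_samples):
        time.sleep(interval)
        cur = read_event_counts()
        samples.append({k: cur[k] - prev.get(k, 0) for k in cur})
        prev = cur
    return samples

samples = sample_features(3)
```

With the constant stub, each sampled delta is zero; with a real tracer the deltas form one feature vector per sampling interval.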

A histogram of how many objects are allocated in each method.

A histogram of how many input arguments are passed to each method.

A histogram of how many conditional statements are encountered in each method.

A histogram of the methods called from within each method.

A histogram of the invariant loops encountered within each method.

A histogram of the accesses to external resources.

A histogram of comparisons for loop invariants per method.

A histogram of number of loops, not iterations, encountered within each method.

A histogram of method call depths.
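For illustration, white-box histograms like those listed above can be built from per-method trace records. The record layout `(method, n_args, n_allocations, call_depth)` is a hypothetical simplification of our own, not the paper's actual instrumentation format:

```python
from collections import Counter

# Hypothetical trace records: (method, n_args, n_allocations, call_depth).
trace = [
    ("handle_request", 2, 5, 1),
    ("parse_query",    1, 2, 2),
    ("parse_query",    1, 3, 2),
    ("render",         3, 8, 2),
]

def histogram(trace, index):
    """Histogram of one traced property across all method calls."""
    return Counter(rec[index] for rec in trace)

args_hist  = histogram(trace, 1)   # how many input arguments per call
depth_hist = histogram(trace, 3)   # method call depths
```

Each such histogram becomes one candidate feature distribution for the key generation framework described next.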
The framework

Calibration phase:
 1. For each sample device or system: measure the desired feature values.
 2. Generate feature distributions describing the frequency of occurrence of discrete values for each sample system.
 3. Normalize the feature distributions and generate normalization maps for each feature.

Operation phase:
 1. Measure the desired systems' features.
 2. Apply the normalization maps to generate values suitable for key generation.
 3. Apply the key generation algorithm.
In the calibration stage, the features described in the previous section, identified as ∅ _{1}, ∅ _{2}, . . ., are extracted as raw data. The data are then quantised and normalised, and finally a multidimensional normalisation map is generated from the normalised data. In the operation phase, measured data are mapped onto the multidimensional normalisation map to form a unique basis number, which is then forwarded to generate the encryption key.
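The calibration steps can be sketched as follows. The rank-based normalization map here is a simplified stand-in of our own, since the text does not fully specify the map's form:

```python
from collections import Counter

def feature_distribution(values):
    """Frequency of occurrence of each discrete feature value (step 2)."""
    counts = Counter(values)
    total = len(values)
    return {v: c / total for v, c in counts.items()}

def normalization_map(dist):
    """Map each observed value to a rank in [0, 1) — a simplified
    stand-in for the paper's normalization maps (step 3)."""
    ordered = sorted(dist)
    return {v: i / len(ordered) for i, v in enumerate(ordered)}

dist = feature_distribution([1, 2, 2, 2, 3, 3])
nmap = normalization_map(dist)
```

In the operation phase, a freshly measured value would be looked up in `nmap` to obtain a value suitable for key generation.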
Feature quantization and normalization

Step 1: Define a virtual model for all features. The virtual model is a virtual probability distribution for a particular feature with no overlap. For instance, Fig. 4 has three servers in total, so the distribution is divided into 3 blocks, and the values are mapped from the real distribution to the virtual model within their owners' blocks. In practice, however, it is not appropriate to split the virtual model into 3 equal-sized blocks: the blocks should be split in proportion to the number of quantised intervals.

Step 2: Repeat for each Cloud server N

◦ Map each feature value of the Cloud server to the virtual model within the specific range defined in step 1.

◦ Feature values are mapped in descending order of frequency. A pseudo-Gaussian is produced by alternating below and above the mean.
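The alternating placement in step 2 can be sketched as below. Applied to server 1's values in descending order of frequency (2, 1, 4, 3, 5, per the tables in this section), it yields the ordering 3, 1, 2, 4, 5, with the most frequent value landing in the middle:

```python
def pseudo_gaussian_order(values_by_freq):
    """Arrange values (given in descending order of frequency) alternately
    above and below the centre, producing a pseudo-Gaussian shape."""
    left, right = [], []
    for i, v in enumerate(values_by_freq):
        (left if i % 2 else right).append(v)
    # Reversing the left half puts the most frequent value in the middle.
    return left[::-1] + right

order = pseudo_gaussian_order([2, 1, 4, 3, 5])  # server 1's values by frequency
```

This reproduces the origin-location column for S1F8 in the multilevel mapping table.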

Feature value of three servers
Location  Value  Feature value  Frequency  

S1F8V1  1  962  0.009 
S1F8V2  2  1946  0.985 
S1F8V3  3  2931  0.002 
S1F8V4  4  3915  0.003 
S1F8V5  5  4899  0.001 
S2F8V1  3  2931  0.013 
S2F8V2  4  3915  0.986 
S2F8V3  6  5884  0.001 
S3F8V1  14  13,759  0.009 
S3F8V2  15  14,744  0.985 
S3F8V3  16  15,728  0.002 
S3F8V4  17  16,713  0.003 
S3F8V5  19  18,682  0.001 
S3F8V6  20  19,666  0.001 
According to the algorithm above, in step 1 a virtual feature distribution (virtual model) is defined based on the number of values. In our example, server 1 has 5 values, server 2 has 3 values and server 3 has 6 values, so the virtual model is divided in the proportion 5/14, 3/14, 6/14. This means server 1 has 5 intercepts, server 2 has 3 intercepts and server 3 has 6 intercepts in our virtual model.
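A sketch of this proportional split, using the per-server value counts of the running example (5, 3 and 6 values):

```python
def virtual_model_blocks(value_counts, total_range=1.0):
    """Split the virtual model into contiguous blocks proportional to the
    number of quantised values each server contributes (5/14, 3/14, 6/14
    in the running example)."""
    total = sum(value_counts)
    blocks, start = [], 0.0
    for c in value_counts:
        width = total_range * c / total
        blocks.append((start, start + width))
        start += width
    return blocks

blocks = virtual_model_blocks([5, 3, 6])
```

Each server's feature values are then mapped only within its own block, which is what keeps the virtual distributions non-overlapping.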
Feature values after multilevel mapping
Feature  Location  Origin location  Feature value  

S1F8  1  3  2931 
S1F8  2  1  962 
S1F8  3  2  1946 
S1F8  4  4  3915 
S1F8  5  5  4899 
S2F8  6  3  2931 
S2F8  7  4  3915 
S2F8  8  6  5884 
S3F8  9  19  18,682 
S3F8  10  16  15,728 
S3F8  11  14  13,759 
S3F8  12  15  14,744 
S3F8  13  17  16,713 
S3F8  14  20  19,666 
Multidimensional normalization map
After quantization and normalisation, our goal is to generate a multidimensional normalization map, i.e. a multidimensional feature value space used to generate a unique basis number. In our previous work [17], normalization maps were linear: each individual feature was mapped to a vector and the vectors concatenated together. That method is fairly easy to implement but makes key generation unstable. The fundamental idea of the multidimensional normalization map is to map a measured series of feature data into a multidimensional space and to determine whether the data are located in a given range. A higher dimension increases the probability of the data appearing in that range, because even if some features fall outside their correct range, the majority of the other features are still well placed to make the right decision. The higher-dimensional feature space also increases the entropy of the digital identity; in other words, the difficulty for an attacker of recovering the key grows exponentially. In order to determine a specific range in the space, we also developed a multidimensional space definer algorithm. The algorithm adopts standard pattern recognition techniques to detect which part of the space a measurement belongs to, and the measured data are then guided to that specific region. Finally, a unique basis number generation algorithm produces a basis number based on the measured data themselves and the range of the specific multidimensional space. An example of the space definer algorithm is described below:
In the function above, ‘a’ is the start point of the quantization defined in the previous step and ‘b’ is the end point. i is the server number (we have 9 servers in our experiment), and P _{ i } represents the i-th server. The function returns the probability that x belongs to each of the 9 servers; the system then maps x to the range with the highest probability.
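Assuming Gaussian per-server feature distributions (the paper does not fix the model, so the per-server means and standard deviations below are illustrative), a minimal space definer could look like this:

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of x under a normal distribution with the given mean/std."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def space_definer(x, server_models):
    """Return the index of the server whose (assumed Gaussian) feature
    distribution gives x the highest probability; x is then mapped into
    that server's region of the multidimensional space."""
    probs = [gaussian_pdf(x, m, s) for (m, s) in server_models]
    return max(range(len(probs)), key=probs.__getitem__)

# Hypothetical per-server (mean, std) pairs for one feature.
models = [(2.0, 1.0), (4.0, 1.0), (16.0, 2.0)]
best = space_definer(15.0, models)
```

A measurement of 15.0 is assigned to the third server's region here, since its distribution dominates at that point.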
Multidimensional key space generation
 (1)
Define left and right boundaries for all features in all servers (define feature range).
 (2)
Generate a segment identifier number for every feature value. The global feature distribution is segmented by the intercepts of feature i of server 1. We calculate how many segments lie to the left of feature i of server 1:
In the above equations, SL is the number of intercepts to the left of our test data and SR is the number of intercepts to the right; S is the total number of intercepts in the feature distribution. In practice, we add one more intercept at the left and another at the right, as can be seen in Fig. 6. We then use the binary bits of a Gray code to index each segment of the distribution; the indexes start from 0 and end with the binary number 1000.
Generally, \( \log_2\left(\frac{k_i{\delta}_i}{k_g{\delta}_g}+2\right) \) bits are sufficient to index each segment with a unique binary number. In Fig. 6, there are 9 segments covering the global feature distribution, so ⌈log _{2}(9)⌉ = 4 bits are employed to identify each intercept.
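A generic Gray-code segment indexing can be sketched as follows; the ⌈log₂ n⌉-bit width and the specific label assignment are illustrative rather than the paper's exact scheme. The Gray-code property (adjacent segments differ in exactly one bit) is what limits the damage when a measurement lands one segment away from its calibrated position:

```python
def gray_code(n):
    """n-th Gray code value: consecutive indices differ in exactly one bit."""
    return n ^ (n >> 1)

def segment_indices(n_segments):
    """Assign each distribution segment a Gray-code binary label, wide
    enough to cover all segments."""
    width = max(1, (n_segments - 1).bit_length())
    return [format(gray_code(i), f"0{width}b") for i in range(n_segments)]

labels = segment_indices(9)  # the 9 segments of Fig. 6 need 4-bit labels
```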
Unique basis number correction
In order to improve the stability of the unique basis number, we investigated two error correction schemes: Reed-Solomon codes [10] and Shamir secret sharing [3]. In our test data, the unique basis number is an 800-bit binary number. To apply a Reed-Solomon code, we chop it into 160 pieces, each piece containing 5 bits. We choose RS(7,5) over the Galois field GF(2^{ m }) with m = 3, defined by the polynomial P(x) = x ^{3} + x + 1. This scheme allows correcting 1 bit in every 5 bits.
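The second scheme, Shamir secret sharing, can be sketched minimally over a prime field; the field size and threshold below are illustrative, since the paper does not specify its parameters:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than the shared values

def make_shares(secret, t, n, prime=PRIME):
    """Split `secret` into n shares, any t of which reconstruct it:
    a random degree t-1 polynomial with f(0) = secret is evaluated at n points."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares, prime=PRIME):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % prime
                den = den * (xi - xj) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret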
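The second scheme, Shamir secret sharing, can be sketched minimally over a prime field; the field size and threshold below are illustrative, since the paper does not specify its parameters:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than the shared values

def make_shares(secret, t, n, prime=PRIME):
    """Split `secret` into n shares, any t of which reconstruct it:
    a random degree t-1 polynomial with f(0) = secret, evaluated at n points."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares, prime=PRIME):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % prime
                den = den * (xi - xj) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret
```

Splitting chunks of the basis number this way lets a few noisy chunks be discarded while any t clean shares still reconstruct the value.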
Experiment and result
In this section, we first evaluate the Hamming distance of our unique basis numbers with respect to the corresponding servers. Then, we evaluate our error correction algorithm via the false negative rate. Finally, we evaluate our smooth entropy estimation under appropriate circumstances.
Hamming distance vs k _{ i }
False negative rate vs k _{ i }
False negative rate vs RS codes
Entropy
Fundamental to our empirical evaluation is the measure of software behaviour entropy we chose. As described above, the software behaviours are used to form a unique basis number, and intuitively our measure of entropy should capture the amount of remaining uncertainty in that number. Initially, the unique basis number is 800 bits long, so if an attacker knows nothing about the system the key space is simply 2^{800}. In fact, the bits in the unique basis number are not independent. There are two aspects to consider in this section: error correction, and the uniformly random bits of the unique basis number. Firstly, we need to calculate how many uniformly random bits can be extracted from the feature set; this is the process of converting an arbitrary random source into a source with a smaller alphabet and an almost uniform distribution. Secondly, since an error correcting process is applied to improve robustness, we have to discount the corrected bits as well. To capture the realistic entropy, consider a feature X over the alphabet χ with probability distribution P _{ X }. We apply an entropy smoothing function ƒ:χ → y to X such that Y = f (X) is uniformly distributed over its range y. The largest range y such that Y is still sufficiently uniform is a measure of the amount of smooth entropy inherent in X, or extractable from X, relative to the allowed deviation from perfect uniformity. Viewed differently, this process eliminates almost all partial information available about X and concentrates as much as possible of the uncertainty of X in Y.
The algorithm comprises two steps. In the first step, a bin packing algorithm groups the bits' probabilities into blocks of the most equal size possible (each block having a similar total probability). Bin packing is the problem of deciding whether n positive integers can be partitioned into b subsets such that the sum of every subset is at most c; it is NP-complete, and the smaller the output alphabet, the more uniform its distribution. In the second step, the algorithm repeatedly checks whether the current grouping is the best by comparing the relative entropy and the L1 distance. An example is given below:
A random value distribution
x         a     b     c     d     e     f     g     h     i
P _{ X }(x)  0.08  0.05  0.06  0.16  0.19  0.21  0.19  0.02  0.04

Step 1: We want to produce an almost uniform distribution Y from the random variable X.

Step 2: Calculate the relative entropy and L1 distance of the groups in Table 6.
Table 6
The table shows how the random variable x with nine values in Table 5 can be converted to more uniform random variables

s = 5:
  y             1      2      3          4      5
  P _{Y_5}(y)   0.21   0.21   0.20       0.19   0.19
  f _{5}(y)     f      b,d    a,c,h,i    e      g

s = 4:
  y             1      2      3      4
  P _{Y_4}(y)   0.27   0.25   0.25   0.23
  f _{4}(y)     e,a    f,i    g,c    b,d,h

s = 3:
  y             1      2          3
  P _{Y_3}(y)   0.35   0.33       0.32
  f _{3}(y)     e,d    f,c,h,i    g,a,b

s = 2:
  y             1          2
  P _{Y_2}(y)   0.5        0.5
  f _{2}(y)     f,e,c,i    g,d,a,b,h
Relative entropy and L1 distance of every group in Table 6
s                        5      4       3       2

D(P _{Y_s} || P _{U})    0.001  0.0016  0.0007  0
||P _{Y_s} − P _{U}||_1  0.04   0.04    0.03    0
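The reported distances can be checked numerically. This sketch recomputes the relative entropy and L1 distance to the uniform distribution for the s = 5 grouping:

```python
import math

# Probabilities of the nine values a..i from Table 5.
P = {"a": .08, "b": .05, "c": .06, "d": .16, "e": .19,
     "f": .21, "g": .19, "h": .02, "i": .04}

# The s = 5 grouping from Table 6: f | b,d | a,c,h,i | e | g.
groups = [["f"], ["b", "d"], ["a", "c", "h", "i"], ["e"], ["g"]]

def smoothed(P, groups):
    """Probability of each group under the smoothing map f_5."""
    return [sum(P[x] for x in g) for g in groups]

def kl_to_uniform(p):
    """Relative entropy D(P_Y || P_U) in bits against the uniform law."""
    u = 1 / len(p)
    return sum(q * math.log2(q / u) for q in p if q > 0)

def l1_to_uniform(p):
    """L1 distance ||P_Y - P_U||_1 to the uniform law."""
    u = 1 / len(p)
    return sum(abs(q - u) for q in p)

py = smoothed(P, groups)   # [0.21, 0.21, 0.20, 0.19, 0.19]
d  = kl_to_uniform(py)     # rounds to the 0.001 reported for s = 5
l1 = l1_to_uniform(py)     # 0.04, as reported for s = 5
```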
Secondly, the error correction mentioned in the previous section chops the unique basis number into N/5 pieces, where N is the length of the unique basis number. Since RS(7,5) corrects only 1 bit in every 5, the total number of correctable bits is N/5. Although the RS(7,5) scheme does not perform a correction every time, it does release some information about the unique basis number, so we eliminate one segment of every 5 segments. In practice, a perfectly uniform distribution does not exist, and the measure of non-uniformity is always taken at its minimum value. We therefore develop the following formula:
Let M be a non-uniformity measure. A family X of random variables with m dimensions over the alphabet χ has smooth entropy Ψ (X), defined via a function ƒ:χ → y chosen so that M(Y) is minimal, where E is the number of bits corrected in the previous process. Formally, \( \varPsi (X)={2}^{\sum_{m\ge 1}^{m}\left\{\,|y| \;\middle|\; f:\chi \to y,\ M(Y)_{min}\right\} - E} \)
To calculate the entropy, we set k = 130 and use the RS(7,5) code. The entropy of the multidimensional space we can report is 2^{90}.
Conclusion
In this paper, we investigated a reliable cryptographic key generation algorithm which extracts Cloud servers' behaviours as features to form a digital signature for the Cloud environment. Firstly, in order to extract Cloud server behaviours, we developed two strategies, termed the black box and the white box: the black box is responsible for behaviours observable outside the Cloud servers, while the white box explores their internal behaviours. In total, 60 features were collected and evaluated for our system. A multilevel mapping algorithm is then used to transform unusual distributions into a traditional Gaussian form. Next, a multidimensional normalization map generation algorithm is programmed to generate a multidimensional normalisation map, and a multidimensional binary key mapping algorithm is developed to map measured data from the multidimensional space to a key vector. A Reed-Solomon error correction algorithm is adopted to improve the stability of the binary key. Finally, an entropy smoothing algorithm based on the bin packing algorithm, incorporating relative entropy and L1 distance, is explored.
Our results indicate that the usability of the ICMetrics technology in a Cloud environment is satisfactory. The false negative rate we can report is around 0.003 on average across the 9 servers. Although servers 6, 7 and 8 have higher false negative rates than the others, they are still feasible for the system. In comparison with our previous research, the false negative rate has been halved. The entropy of the unique basis number we can report is at least 2^{90}, which satisfies the current standard for cryptographic systems.
Declarations
Acknowledgements
The authors gratefully acknowledge the support of the EU Interreg IV A 2 Mers Seas Zeeën Cross-border Cooperation Programme – SYSIASS project: Autonomous and Intelligent Healthcare System (project’s website http://www.sysiass.eu/).
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
 Appiah, K, Zhai, X, Ehsan, S, Cheung, WM, Hu, H, Gu, D, Howells, G (2013) Program Counter as an Integrated Circuit Metrics for Secured Program Identification. In Emerging Security Technologies (EST), Fourth International Conference on, IEEE, Cambridge, pp 98–101
 Barham, P, Dragovic, B, Fraser, K, Hand, S, Harris, T, Ho, A, Warfield, A (2003) Xen and the art of virtualization. ACM SIGOPS Operating Systems Review. doi:10.1145/1165389.945462
 Benaloh, J (1987) Secret sharing homomorphisms: Keeping shares of a secret secret. In Advances in Cryptology — CRYPTO '86, pp 251–260. doi:10.1016/S1361-3723(05)00151-X
 Chang, YJ, Zhang, W, Chen, T (2004) Biometric-based cryptographic key generation. In Multimedia and Expo, ICME '04, 2004 IEEE International Conference on, vol. 3, pp 2203–2206
 Desnoyers, M, Dagenais, M (2008) LTTng: Tracing across execution layers, from the Hypervisor to userspace. In: Linux Symposium (Vol. 101)
 Howells, G, Papoutsis, E, Hopkins, A, McDonald-Maier, K (2007) Normalizing Discrete Circuit Features with Statistically Independent values for incorporation within a highly Secure Encryption System. In Adaptive Hardware and Systems, 2007. AHS 2007. Second NASA/ESA Conference on, IEEE, Edinburgh, pp 97–102
 Jermyn, I, Mayer, A, Monrose, F, Reiter, MK, Rubin, AD (1999) The Design and Analysis of Graphical Passwords. In Proceedings of the 8th conference on USENIX Security Symposium (SSYM'99), Vol. 8. USENIX Association, Berkeley, CA, USA, 11. USENIX Association, Washington, D.C
 Kolbitsch C, Comparetti PM, Kruegel C, Kirda E, Zhou X, Wang X, Antipolis S (2009) Effective and efficient malware detection at the end host. System 4(1):351–366. doi:10.1093/mp/ssq045
 Kovalchuk, Y, McDonald-Maier, K, Howells, G (2011) Overview of ICmetrics Technology: Security Infrastructure for Autonomous and Intelligent Healthcare System. Int J U- & E-Service, Sci & Tech 4(3):49–60
 Krachkovsky VY (2003) Reed-Solomon codes for correcting phased error bursts. IEEE Transact Info Theory 49:2975–2984. doi:10.1109/TIT.2003.819333
 Maier, K D (2003) On-chip debug support for embedded Systems-on-Chip. Proceedings of the 2003 International Symposium on Circuits and Systems, 2003. ISCAS '03, 5. doi:10.1109/ISCAS.2003.1206375
 Meguerdichian, S, Potkonjak, M (2011) Device aging-based physically unclonable functions. 2011 48th ACM/EDAC/IEEE Design Automation Conference (DAC), pp 288–289. doi:10.1145/2024724.2024793
 Monrose, F, Reiter, MK, Wetzel, S (2001) Cryptographic Key Generation from Voice. IEEE Symposium on Security and Privacy, Oakland, CA, pp 202–213. doi:10.1109/SECPRI.2001.924299
 Nurmi, D, Wolski, R, Grzegorczyk, C, Obertelli, G, Soman, S, Youseff, L, Zagorodnov, D (2009) The Eucalyptus open-source cloud-computing system. In 2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid, CCGRID 2009, pp 124–131. doi:10.1109/CCGRID.2009.93
 Papoutsis, E, Howells, G, Hopkins, A, McDonald-Maier, K (2007) Key Generation for Secure Inter-satellite Communication. Second NASA/ESA Conference on Adaptive Hardware and Systems (AHS 2007), pp 671–681. doi:10.1109/AHS.2007.67
 Papoutsis, E, Howells, G, Hopkins, A, McDonald-Maier, K (2007) Normalizing Discrete Circuit Features with Statistically Independent values for incorporation within a highly Secure Encryption System. In Adaptive Hardware and Systems, 2007. AHS 2007. Second NASA/ESA Conference on, pp 97–102
 Papoutsis, E, Howells, G, Hopkins, A, McDonald-Maier, K (2009) Integrating Feature Values for Key Generation in an ICmetric System. 2009 NASA/ESA Conference on Adaptive Hardware and Systems. doi:10.1109/AHS.2009.30
 Tahir, R, McDonald-Maier, K (2012) An ICMetrics-based Lightweight Security Architecture using Lattice Signcryption. In Emerging Security Technologies (EST), 2012 Third International Conference on, IEEE, Lisbon, pp 135–140
 Wang, X, Jhi, YC, Zhu, S, Liu, P (2009) Behavior based software theft detection. Proceedings of the 16th ACM Conference on Computer and Communications Security  CCS’09, 280. doi:10.1145/1653662.1653696