DIPARTIMENTO   DI   INFORMATICA
Università di Torino

Research Report Year 2002

Computer Science

Computer Systems and Networks


Security and Computer Networks

People


Last and first name     Position              Email
Bergadano Francesco     Full Professor        bergadan(at)di.unito.it
Sirovich Franco         Full Professor        franco(at)di.unito.it
Gunetti Daniele         Associate Professor   gunetti(at)di.unito.it
Ruffo Giancarlo         Researcher            ruffo(at)di.unito.it
Cavagnino Davide        Researcher            cavagnino(at)di.unito.it
Nesta Andrea            Ph.D. Student         camerano(at)di.unito.it
Dal Checco Paolo        Ph.D. Student         dalchecco(at)di.unito.it


Research activity in 2002

The group's work in security and computer networks began in 1994, with special interest in public key systems as a basis for secure wide area network communications. Since then, ten students have graduated with a thesis in network security, supervised by Prof. Bergadano. These activities have continued to the present day, with a significant collaboration with the University of Cambridge covering such issues as public key certification, innovative digital signature mechanisms, and WWW security. From 1994 until 1996, research on secure agent architectures was carried out in collaboration with Prof. Vita at the University of Messina. In 1996, activities in computer security were also started, especially in the areas of password checking, intrusion detection and Web security. Other undergraduate theses in this area, also supervised by Prof. Bergadano, are nearing completion. In the wider area of computer networks, Prof. Sirovich has investigated the ISO/OSI protocols over the past ten years, with special reference to network management and directory services. The activity in 2002 included: a project on the study and implementation of secure multicast; the development of a system to certify the number of visitors to an Internet site; the development of a protocol for signing large files; the investigation of benchmarking technology; and the study of biometric systems based on keystroke analysis, used to recognize users, reject impostors, and track legitimate users over remote connections.

a) Multicast Security

Multicast transmission means sending the same information to a group of receivers, using protocols devised to do so efficiently. Authentication of origin is a requirement of some applications, and multicast transmission calls for efficient techniques to provide it. One protocol for individual authentication had already been developed and tested. This year we continued the investigation of another protocol we developed, which may be used by a set of peers continuously multicasting information among themselves (information that needs individual authentication). Some analysis has been performed to look for possible improvements to the protocol.
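
The report does not detail the protocol itself; as an illustration of one widely used approach to efficient multicast origin authentication (delayed key disclosure over a one-way key chain, in the style of TESLA), the sketch below is offered under that assumption. The sender publishes a commitment to a hash chain, authenticates each packet with a not-yet-disclosed chain element, and discloses elements later; receivers verify a disclosed key by hashing it back to the commitment. All function names are illustrative.

```python
import hashlib
import hmac

def make_key_chain(seed: bytes, length: int) -> list:
    """Build a one-way key chain: hashing chain[i] once gives chain[i-1].
    chain[0] is the public commitment; later elements are disclosed in order."""
    chain = [hashlib.sha256(seed).digest()]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()
    return chain

def verify_key(disclosed: bytes, commitment: bytes, steps: int) -> bool:
    """Check that a disclosed key hashes back to the commitment in `steps` steps."""
    k = disclosed
    for _ in range(steps):
        k = hashlib.sha256(k).digest()
    return k == commitment

def mac(key: bytes, packet: bytes) -> bytes:
    """Authenticate one multicast packet with a chain element."""
    return hmac.new(key, packet, hashlib.sha256).digest()
```

A receiver buffers each packet with its MAC and accepts it only once the corresponding key is disclosed and verified against the commitment.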

b) Large file signature

A paper presenting some protocols for signing large files was presented at an international conference. These protocols apply to the certification of large files, including situations in which the signer is not allowed to see the whole file. Moreover, the protocols are efficient, in the sense that they do not require examining the entire file to be signed.
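
The published protocols are not reproduced here; a standard construction with the same property, sketched below as an assumption, is a Merkle hash tree: only the root digest need be handled (and signed), yet any individual block can later be verified against it with a logarithmic-size authentication path.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Hash the file blocks into a single root digest (the value to be signed)."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(blocks, index):
    """Authentication path for one block: (sibling digest, node-was-right-child)."""
    level = [h(b) for b in blocks]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_block(block, path, root) -> bool:
    """Recompute the root from one block and its path; compare with the signed root."""
    node = h(block)
    for sibling, node_is_right in path:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root
```

Signing the root certifies every block at once, without the signer reading the whole file if the root is computed incrementally or by a trusted hasher.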

c) Internet Traffic Certification and Analysis

In some computing environments it is important to control accesses to a web site, for example for security and/or statistical purposes. Access data must be reliable and usable: accesses must not be forged (in particular, IP addresses must not be spoofed, and each client request must be logged as sent, without alteration), and the data must be presented to humans in a meaningful way. Techniques and software are being studied and developed to reach these objectives, maintaining compatibility with widely used protocols and file formats.

d) Web quality assessment

The rapid growth in the number of Web users, and the consequent importance of capacity planning, have led to the development of many commercially available Web benchmarking tools. One of the most common criticisms of this approach is that the synthetic workload produced by web stressing tools is far from realistic. We investigated a benchmarking methodology based on a workload characterization generated from log files. The Customer Behavior Model Graph (CBMG) was originally proposed by Menascé and Almeida as a workload characterization of an e-commerce site. We investigated how CBMGs have a wider field of application, and how to use this model to improve a web stressing tool. We also laid the groundwork for future work on the implementation of a fully integrated web stressing tool and its evaluation against other approaches and models based on different characterizations. This research has been conducted in a research laboratory (WTLAB) jointly coordinated by the Dipartimento di Informatica and CSP s.c.a r.l. For more information about WTLAB: http://www.wtlab.it/
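
A CBMG is essentially a graph of page states with transition probabilities estimated from logged user sessions. The sketch below, with illustrative names and state labels, shows the core of such a characterization: counting observed transitions and normalizing them per state.

```python
from collections import defaultdict

def cbmg_transitions(sessions):
    """Estimate CBMG transition probabilities from logged sessions.
    Each session is a list of page states, e.g. ["Home", "Search", "Exit"].
    Returns {state: {next_state: probability}}."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for a, b in zip(session, session[1:]):
            counts[a][b] += 1
    probabilities = {}
    for state, row in counts.items():
        total = sum(row.values())
        probabilities[state] = {nxt: n / total for nxt, n in row.items()}
    return probabilities
```

A stressing tool can then generate synthetic sessions by walking this graph, so that the workload reflects the behavior actually observed in the logs.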

e) Keystroke Analysis

A patent on a biometric system performing keystroke analysis was obtained in 2002. The method developed proved to be by far the best among those based on keystroke analysis at recognizing legitimate users and impostors who try to access a controlled resource. In 2002 the method was extended to the analysis of free text and thoroughly tested. It proved to be the only method found in the literature able to effectively discriminate between legitimate users and intruders of the monitored system, with a very high level of accuracy and transparently to the users (and impostors).
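
The patented method itself is not reproduced here. As a simplified sketch of order-based keystroke comparison, the code below compares two typing samples by the relative ordering of their shared n-graph durations: sorting the shared n-graphs by duration in each sample and measuring how far the two orderings disagree, normalized by the maximum possible disorder. Names and the normalization constant are illustrative.

```python
def disorder_distance(sample_a, sample_b):
    """Relative-timing distance between two typing samples, each a dict
    mapping an n-graph (e.g. "the") to its typing duration in ms.
    0.0 = identical ordering of shared n-graphs, 1.0 = fully reversed."""
    shared = sorted(set(sample_a) & set(sample_b))
    if len(shared) < 2:
        return 1.0  # too little shared material to compare
    order_a = sorted(shared, key=lambda g: sample_a[g])
    order_b = sorted(shared, key=lambda g: sample_b[g])
    position_b = {g: i for i, g in enumerate(order_b)}
    disorder = sum(abs(i - position_b[g]) for i, g in enumerate(order_a))
    n = len(shared)
    max_disorder = (n * n) // 2 if n % 2 == 0 else (n * n - 1) // 2
    return disorder / max_disorder
```

A low distance to a user's stored profile, and a high distance to everyone else's, is the kind of evidence such a system uses to accept a user or reject an impostor.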


f) The X.500 Directory protocol (Franco Sirovich)

The X.500 Directory protocol makes it possible to build a sophisticated distributed database with partial replication. Essentially, it offers a flexible schema with search functions over the database content. In the 1992 standard from ISO and CCITT, the functionality of the X.500 service was extended by introducing access control. Basic Access Control functionality has been studied, and an algorithm was developed to verify access control at runtime for a generic X.500 database. An interesting new concept has originated from research and development on Directories: the MetaDirectory. A MetaDirectory is a controlled union of all the Directories of an organisation: on one side it provides a single point of access to a common repository of data for the whole organisation; on the other side it makes it possible to synchronise the data contained in different databases and to control the flow of data from one database to another that is configured to receive updates from the "master" one. The interesting point is that mastership is defined at the attribute level, and the values of attributes can be obtained via an appropriate computation from one or more source attributes. We are experimenting with these new concepts in the area of controlled access to University data from web servers.
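
Attribute-level mastership can be sketched very simply: for each attribute, exactly one source directory is authoritative, and the merged MetaDirectory entry takes each attribute's value from its master. The directory names and attributes below are illustrative.

```python
def metadirectory_entry(sources, mastership):
    """Merge one person's entry from several directories.
    `sources` maps directory name -> attribute dict for that person;
    `mastership` maps attribute name -> the directory that is master for it."""
    return {attr: sources[master][attr]
            for attr, master in mastership.items()
            if master in sources and attr in sources[master]}
```

For example, the HR directory may be master for the name while the IT directory is master for phone and email; conflicting values in non-master directories are simply ignored (or, in a full system, overwritten by the master's update flow).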

g) Network Management (Franco Sirovich)

With the development of applications on computer networks, the problem of managing complex network systems has become more and more important. Both within ISO/ITU and the Internet community, specific protocols and information models have been developed to realize distributed systems that manage both network elements and distributed network applications. The two network management models are not equivalent, even if a comparative study points out interesting analogies. Cryptographic key management in a security system for telecommunications is an interesting area in which to apply the management model and the corresponding OSI protocols.

An interesting problem that is now being studied is the monitoring of Service Level Agreements. With the widespread adoption of outsourcing contracts for ICT services, the need has emerged to define precisely the level of service that the provider must offer to users. The finalisation of a document called a Service Level Agreement (SLA) is considered extremely beneficial to achieving customer satisfaction. The problem is then being able to monitor the level of service actually offered to a group of users, and to manage the resources of the ICT system so that the level of service defined in the SLA is met, thus preventing users from perceiving a degradation of service.

Monitoring Service Level Agreements is radically different from monitoring the performance of devices, computers, or telecommunication lines, because it requires measuring the service that applications actually deliver to users, not the service that an application "believes" it is delivering. A Service Level Agreement monitoring system must allow operators to identify the real causes of performance degradation before users perceive the degradation and complain to the administrators.
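
SLA targets are typically stated over the distribution of user-perceived measurements rather than over averages, e.g. "95% of requests complete within 2 seconds". A minimal check of that kind, with illustrative names and the nearest-rank percentile method, might look as follows:

```python
import math

def sla_check(response_times_ms, threshold_ms=2000.0, percentile=95):
    """Compare measured end-user response times against an SLA target such as
    '95% of requests complete within 2 seconds'.
    Returns (observed percentile value, True if the SLA is met)."""
    ordered = sorted(response_times_ms)
    rank = math.ceil(percentile / 100 * len(ordered))  # nearest-rank percentile
    value = ordered[rank - 1]
    return value, value <= threshold_ms
```

A monitoring system would evaluate such checks continuously per user group, raising an alarm while the observed percentile trends toward the threshold, before the SLA is actually violated.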


h) Public key systems and certification

This research is concerned with public key certification in distributed environments, proposing a certification scheme for exchanging digitally signed documents. Our system, proposed in collaboration with the University of Cambridge, includes the implementation of a separate authority dedicated to deleting public key certificates that are no longer valid. Log file records are bound in a chain of hash values, so that the authority cannot delete them undetected. The system has been implemented, and a series of experiments is starting to integrate the above certification system with a few browser and mail services available on the market, in both Unix and Windows NT environments. Through this structure it is possible to obtain a number of secure services that require signed and/or encrypted documents to be sent.
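
The hash-chaining of log records mentioned above can be illustrated with a minimal sketch (names and record format are illustrative): each entry's digest covers both the record and the previous entry's digest, so deleting or altering any record breaks verification of everything that follows.

```python
import hashlib

GENESIS = b"\x00" * 32  # fixed starting value for the chain

def append_record(log, record: bytes):
    """Append a record; its digest binds it to the previous entry."""
    prev = log[-1][1] if log else GENESIS
    digest = hashlib.sha256(prev + record).digest()
    log.append((record, digest))

def chain_valid(log) -> bool:
    """Recompute every digest from the start; any tampering shows up."""
    prev = GENESIS
    for record, digest in log:
        if hashlib.sha256(prev + record).digest() != digest:
            return False
        prev = digest
    return True
```

An auditor who periodically stores the latest digest can later detect even a truncation of the whole tail of the log.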

h.1) Secure Mailtools

In this research we studied PGP (Pretty Good Privacy), the most popular mail encryption and signature tool available worldwide. Within the context of two undergraduate theses supervised by the security group, a public domain program named "ACT" was implemented in 1997 and maintained from 1998 through 2002. This tool is compatible with PGP (versions prior to 5.0), but solves some of its weaknesses.

h.2) Digital Signatures and Authentication

This research studied mechanisms for generating digital signatures based on public key algorithms. In collaboration with the University of Cambridge, we made a new and alternative proposal for generating digital signatures based only on hash function chains. This mechanism turns out to be more efficient than the traditional approaches, and moreover it avoids the problems connected with the restrictions some countries impose on the export of encryption software.
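
The Cambridge proposal itself is not reproduced here. As an illustration that signatures can be built from hash functions alone, without public key encryption, the sketch below implements Lamport's classic one-time signature scheme: the private key is a table of random values, the public key their hashes, and a signature reveals one preimage per bit of the message digest.

```python
import hashlib
import secrets

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    """One-time key pair: 256 pairs of secrets and their hashes."""
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[h(x) for x in pair] for pair in sk]
    return sk, pk

def digest_bits(message: bytes):
    d = h(message)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(message: bytes, sk):
    """Reveal the secret matching each bit of the message digest."""
    return [sk[i][b] for i, b in enumerate(digest_bits(message))]

def verify(message: bytes, sig, pk) -> bool:
    """Hash each revealed secret and compare with the public key."""
    return all(h(sig[i]) == pk[i][b] for i, b in enumerate(digest_bits(message)))
```

Each key pair must sign only one message; hash-chain constructions extend such one-time schemes to many signatures, which is the general direction the text refers to.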



Administrator: wwwadm[at]di.unito.it Last update: May 17, 2018