Preparation and integration of architectural elements necessary for the secure operation of national data asset records and related specialized systems
The advent of broadband internet, the proliferation of affordable ARM-based devices (mobile phones, tablets, etc.) and the rise of the Internet of Things (IoT) have given citizens broad access to the online space. These changes have also had a major impact on public administration: paper-based administration is becoming increasingly marginal, while e-government is gaining ground. With the growth of e-government, IT systems ‒ both in number and in scope of services ‒ have grown exponentially and are still changing and evolving.
To keep these systems operational, it is essential to use Artificial Intelligence (AI) in e-government: to maximize the availability of services, i.e. to minimize and, where possible, prevent errors arising during operation, and to automate operation, i.e. to minimize human intervention.
This subproject can be divided into the following major research topics:
- analysis of data transmission patterns of interoperable systems;
- detection of data inconsistencies, i.e. deviations from the expected data pattern;
- setting up of a security alert system (with notification and intervention protocols).
To take advantage of the possibilities provided by AI in e-government, it is essential to centralize the data that provide a snapshot of system operation in traditional (decentralized) monolithic systems (how many computing resources a given service requires, how many people use a given service, etc.). These centrally collected data are the inputs of machine learning algorithms, which produce a model. With the models built in this way, we can forecast future behaviour or detect anomalies (deviations from the norm).
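As a minimal sketch of this idea, the centrally collected metrics can be treated as a time series and screened for deviations from the norm with a sliding-window z-score test. The metric name and all numbers below are illustrative assumptions, not the project's actual data:

```python
import statistics

def detect_anomalies(series, window=20, threshold=3.0):
    """Flag points deviating more than `threshold` standard deviations
    from the mean of the preceding `window` observations."""
    anomalies = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mean = statistics.fmean(recent)
        stdev = statistics.pstdev(recent)
        if stdev > 0 and abs(series[i] - mean) > threshold * stdev:
            anomalies.append(i)
    return anomalies

# Hypothetical centrally collected CPU-load metric: stable around 0.5,
# with one injected spike at index 30.
cpu_load = [0.5 + 0.01 * ((i * 7) % 5) for i in range(60)]
cpu_load[30] = 3.0
print(detect_anomalies(cpu_load))  # → [30]
```

In practice the learned models mentioned above would replace this fixed statistical rule, but the input/output contract is the same: centralized metric streams in, flagged deviations out.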
Our aim is to examine various deep learning networks and classical machine learning models that can provide effective answers to the problems described above. Due to the nature of the data, our subproject mostly works with multivariate time series. To be able to measure the efficiency of a deep learning network, we created a validation system: we examine how a particular algorithm performs on simulated data and compare it with a neural network trained on real data or with a classical machine learning model.
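The validation-on-simulated-data step can be sketched as follows: generate a multivariate series with anomalies injected at known positions, run a candidate detector, and score it against the ground truth. The generator, the simple per-variable z-score baseline, and all parameters are assumptions for illustration only:

```python
import random
import statistics

random.seed(0)

def simulate(n=200, n_vars=3, anomaly_at=(80, 150)):
    """Generate a synthetic multivariate series with known anomaly positions."""
    series = [[random.gauss(0, 1) for _ in range(n_vars)] for _ in range(n)]
    for t in anomaly_at:
        series[t] = [8.0] * n_vars  # injected outlier in every variable
    return series, set(anomaly_at)

def zscore_detect(series, threshold=4.0):
    """Flag time steps where any variable deviates more than
    `threshold` standard deviations from its column mean."""
    flagged = set()
    for v in range(len(series[0])):
        col = [row[v] for row in series]
        mu, sd = statistics.fmean(col), statistics.pstdev(col)
        flagged.update(t for t, x in enumerate(col)
                       if abs(x - mu) > threshold * sd)
    return flagged

series, truth = simulate()
detected = zscore_detect(series)
recall = len(detected & truth) / len(truth)
precision = len(detected & truth) / max(len(detected), 1)
print(f"recall={recall:.2f} precision={precision:.2f}")
```

The same harness can score a trained neural network instead of the z-score baseline, since only the `zscore_detect` step changes; this is how simulated data makes the models comparable.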
The next issue addressed by the research is the visualization of the observations revealed by AI and the development of an alert chain with appropriate protocols. Once AI has detected a problem, we examine how AI itself can be used to reduce the human intervention needed to solve it. One such solution may be a self-learning algorithm, i.e. one that learns the solutions applied previously.
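An alert chain with notification and intervention protocols could be structured as below. The severity levels, handler names, and escalation rules are hypothetical placeholders, not the project's actual protocol:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int  # 1 = informational ... 3 = critical (assumed scale)
    message: str

def notify_operator(alert: Alert) -> str:
    # Placeholder for a real notification channel (e-mail, SMS, dashboard).
    return f"NOTIFY {alert.source}: {alert.message}"

def auto_remediate(alert: Alert) -> str:
    # Placeholder for a learned intervention, e.g. restarting a service;
    # a self-learning component would pick this action from past solutions.
    return f"INTERVENE {alert.source}: applying stored remediation"

def dispatch(alert: Alert) -> list:
    """Route an alert through the chain: always log, notify operators
    from severity 2, trigger automatic intervention only when critical."""
    actions = [f"LOG {alert.source}"]
    if alert.severity >= 2:
        actions.append(notify_operator(alert))
    if alert.severity >= 3:
        actions.append(auto_remediate(alert))
    return actions

print(dispatch(Alert("db-service", 3, "latency anomaly detected")))
```

The point of the sketch is the separation of protocol (the `dispatch` rules) from actions (the handlers), so that the intervention step can later be replaced by a learned policy without changing the chain itself.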