Posted October 7, 2021, 14:39
Modified December 24, 2022, 22:36
It's worth recalling: from October of this year to October 2022, the Ministry of Digital Development, Communications and Mass Media of the Russian Federation is running an experiment in which citizens submit their biometric data through the Biometrics application. The app must be downloaded and then logged into with a verified account in the Unified Identification and Authentication System (ESIA).
The experiment, incidentally, is being carried out at the suggestion of the FSB of Russia, which had earlier reported that it was not yet ready to support bill No. 946734-7 "On Amendments to the Federal Law 'On Information, Information Technologies and Information Protection'", citing, among other things, the need for a more detailed study of the technical implementation of individuals submitting their biometric personal data through a mobile application, and the need to form a threat model before the bill is adopted.
Citizens who submit both types of biometrics at once - voice and face - via the application will gain access to a wide range of digital services in government agencies, banks, educational institutions and other organizations.
"The digitalizers just can't help themselves," people note on the Web in numerous posts and comments on the topic.
In an interview with RIA Novosti, Natalya Kasperskaya - President of the InfoWatch Group of Companies, Chairman of the Board of the Association of Software Developers "Otechestvenny Soft", and a public figure - explained her attitude to the submission of biometrics, which has lately been pressed on Russians with increasing insistence:
"A technology called deepfakes is now gaining momentum. On the Internet you can find plenty of videos in which the faces of stars, politicians and other famous people have been swapped in. Meanwhile, an ordinary person passes under hundreds of cameras every day; his face, gait and voice end up in dozens of databases of different levels and different forms of ownership, and from those databases they leak left and right.
Tests in this area show depressing results: in a study by South Korean scientists, for example, the biometric authentication systems of Microsoft Azure and Amazon mistook deepfake faces for the real person in 68% and 78% of cases respectively, and did so with a very high degree of confidence.
Audio fakes are all but perfect now, while voice authorization is already used in real banking applications. At present there are no reliable technologies for detecting such fakes.
Everything is possible in the future, but so far there are no defense systems against deepfakes, although development in this area is now very active. Meanwhile, deepfakes have already become a "kitchen technology": any student with basic programming skills can make a convincing video fake capable of misleading masses of viewers or breaking through the authorization system of a bank or Internet service.
So the main danger in introducing biometrics is that it is not yet clear how to protect and verify this data.
Citizens submit fingerprints and photographs of their faces; their faces are captured without their knowledge or consent on the streets, in transport, in offices and shopping centers. Such information can then be leaked, stolen, intercepted and used - for example, in large real estate transactions, in managing a bank account, or in entering restricted facilities.
My personal recommendation: do not submit biometric data under any circumstances, and do not be lured by "convenience". It will be stolen, sold and leaked almost with a guarantee. Let them first explain to us how this data is to be protected - including from their own employees. And personally, I do not understand why biometrics needs to be used at all.
Biometric data, unlike anything else used for identification, is an integral part of a person: you have one set for life. Unlike the passwords, phone numbers and even surnames used today, you cannot change your face, retina, fingerprints or ear shape after a leak or compromise. In other words, this is supersensitive yet unchangeable data, which needs to be stolen only once - and forever.
Now let's think: in order to identify a person, this data must be stored somewhere. Where? Obviously, in a database, like everything else.
Now let's recall how many leaks from user databases there were over the past year. In the Russian banking segment alone there were more than 200 such cases, with 486 million records of personal and payment data leaked.
The number of data leaks in all areas is growing constantly and steadily. And this is not a technological problem: most leaks happen because of the human factor.
Apologists for biometrics assure us that the databases will somehow be specially protected, that cryptographic systems will be introduced.
In reality there is no special way to protect biometric data: it is ordinary data stored in ordinary databases. And the same people work with it - ordinary sysadmins and IT specialists on not particularly high salaries, who assign access rights themselves and can therefore grant themselves access at any level.
The old accounting principle applies here: if abuse is possible, assume it has already taken place.
Indeed, illegal services for looking people up by their personal data have existed for a long time; the black data market is large and growing fast. Attackers are energetic and resourceful, so as soon as a new type of digital content appears, they immediately work out how to profit from it. Once biometric databases exist, there will be leaks, thefts and sales of that data - no doubt about it.
And why do we need to simplify user identification at all? Because it is "comfortable" and "cool"? Why trade security for convenience? Is it really that hard to type a password?
Besides, one must understand that a biometric identification system is an artificial intelligence system, and no such system is 100% accurate. It is always prone to errors of the first and second kind: it can accept the wrong person as genuine (a false accept) or reject the right one (a false reject).
That is, there is always a chance that the system will not recognize you, or will recognize you as someone else you resemble.
The quality of face recognition currently tops out at about 99% - developers usually boast figures around 97-98% - without explaining what kinds of errors hide in the remaining 1-3%.
And to the layman it seems that 99% accuracy is very cool.
But what is 99% in, say, Moscow? About 16-18 million people live in or visit the city during a day, which means about 180 thousand faces could run into an error of the first or second kind - that is, go unrecognized or be confused with someone else.
If we apply biometrics widely, there will be millions of recognition transactions per day - in transport, in banks, at checkpoints and so on - which means hundreds of thousands or millions of errors. Do we need such risks? So before introducing biometrics widely, it seems to me, one needs to think ten times.
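The arithmetic behind these estimates is easy to check. A minimal sketch follows: the ~18 million daytime population and the 99% accuracy figure come from the text above, while the 5-million daily transaction count is a hypothetical assumption added for illustration.

```python
# Rough estimate of face-recognition errors at city scale.
# Population and accuracy figures are taken from the interview;
# the daily transaction count is a hypothetical assumption.

def expected_errors(events: int, accuracy: float) -> int:
    """Expected misrecognitions (false accepts + false rejects combined)."""
    return round(events * (1.0 - accuracy))

# ~18 million people pass under city cameras in a day at 99% accuracy:
print(expected_errors(18_000_000, 0.99))  # -> 180000

# City-wide biometric authentication, assuming 5 million
# transactions per day at the same accuracy:
print(expected_errors(5_000_000, 0.99))   # -> 50000
```

Even at an accuracy no deployed system reliably reaches, the absolute error count stays in the hundreds of thousands - which is the point being made above.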
... With electronic services, too, everything is rather complicated. It is my deep conviction that a parallel paper system must be maintained. However many years pass, the paper must be kept, just in case the electronic system fails.
Backups are made, of course. But, as you rightly noted, forging an electronic document is still possible - for example, through a targeted hack. Moreover, such a forgery is simpler, and it is easier to cover one's tracks: an IT specialist with high access rights who forges documents will also forge his access logs.
In addition, no format or data carrier lives more than 15-20 years; it is already a huge problem, for example, to read floppy disks from the 1990s. And the problem of power outages in an emergency, catastrophe or war does not go away either.
Therefore, in my opinion, an electronic copy cannot be treated as the original. Paper media should still be considered the originals and preserved as the baseline instrument.
Of course, electronic documents need multilevel security systems, cross-storage, automatic backups; linked-list methods should be used, in which each document includes a cryptographic fingerprint - a checksum - so that any change made to a document can be detected.
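The "cryptographic fingerprint" idea described here is, in essence, a hash chain: each record's checksum covers both its own content and the previous record's checksum, so altering any earlier document invalidates every later link. A minimal sketch, assuming SHA-256 as the hash and hypothetical document contents:

```python
import hashlib

def fingerprint(content: str, prev_hash: str) -> str:
    """Checksum covering the document and the previous link's checksum."""
    return hashlib.sha256((prev_hash + content).encode("utf-8")).hexdigest()

def build_chain(documents: list[str]) -> list[str]:
    """Compute the chained checksums for a sequence of documents."""
    hashes, prev = [], ""
    for doc in documents:
        prev = fingerprint(doc, prev)
        hashes.append(prev)
    return hashes

def verify_chain(documents: list[str], hashes: list[str]) -> bool:
    """Recompute the chain; a tampered document breaks all later links."""
    return build_chain(documents) == hashes

docs = ["order #1: hire", "order #2: transfer", "order #3: dismiss"]
chain = build_chain(docs)
print(verify_chain(docs, chain))   # -> True

docs[1] = "order #2: promote"      # forged record
print(verify_chain(docs, chain))   # -> False
```

Note that the chain only makes tampering detectable; an insider with write access to both the documents and the stored checksums could still rebuild the whole chain, which is why the interview's point about the human factor still stands.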
Of course, the enthusiasts of electronic document management promise us all of this; but the risk factors - the human factor, corruption, constant leaks, unreliable media and the fragility of formats - remain.
One can talk about the problems of artificial intelligence for hours; I will try to be brief. There are many different problems.
The first is whether to trust the decisions of artificial intelligence, and whether to delegate to AI the power to make decisions autonomously.
Suppose an AI medical system examined you and made a diagnosis, and it turned out to be wrong. Who is to blame? When a diagnosis is made by a live doctor, error is also possible, but there is a responsible party. And here? No one is to blame for your disability - "we have already installed a new version"?
Or take a driverless vehicle. When a live driver runs someone over, it is a very sad event, but there is someone who will be held responsible for it.
A vehicle that kills is another matter - and there have been more than 40 such cases over the past three or four years in which unmanned vehicles struck people. Who is responsible? The system's developer? No, he merely designed the system. The system's operator? Also unlikely: he only uses the result of the developer's work. So there is no responsible party, and legislators do not yet understand what to do about it. And money - insurance, as is suggested in the West - does not resolve questions of life and death.
The second problem: should artificial intelligence be endowed with subjectivity, granted any rights at all?
It seems to me the answer is obvious: of course not. Artificial intelligence is a robot, a servant of man. And if we grant it even some "rights", we will be unable to cope with the possible unpleasant consequences of "dehumanizing" the human being.
The third problem is deliberately harming people by means of AI.
It should be directly postulated that artificial intelligence must not harm humans and must work exclusively for their benefit. AI cannot be used to discriminate against people on any grounds - income, age, race, gender, health and so on - because under the Constitution of the Russian Federation we are all equal, unless otherwise established by a court or by law.
Then there is the problem of AI transparency: the decision-making algorithms and policies embedded in an AI system by its developers must be open and transparent.
This year, in a panel discussion at the Army 2021 forum, the idea was voiced that any interaction with AI should be clearly disclosed to the user, and that every AI decision should carry the label "made by artificial intelligence" - so that it is clear who made the decision, a human or a machine.
Today, AI decisions concerning people tend to be final. For example: a bank's client-scoring system refuses a borrower a loan, having for some reason judged him insufficiently reliable.
How a citizen can get out of this situation is unclear, because the reasons behind such a decision are opaque - and at present there is no way to challenge an AI's decision.
Some officials are steering us toward digitalization. But they do not fully grasp that, on top of all the other risks, there is a structural one: a new shadow power will emerge in the country - the power of the people who create and operate digital systems: IT specialists, digital clerks, and their bosses.
Let me give an example: suppose city N is deploying a face recognition system that must recognize, say, 10 thousand faces to grant admission to important facilities, and must also fail to recognize 5,000 other, "secret" faces on the street - for example, special-services officers or high-ranking officials.
Do you think the developer will resist the temptation to give his own face special status in the database - granting himself admission anywhere or, conversely, making himself "invisible"? And his boss?
Corruption and compromise of the data will set in instantly - and detecting it is difficult, requiring very high qualifications.