AI should never have become a vehicle for crime, should it?

A famous actor, Stephen Fry, is outraged by the theft of his voice by a neural network.

Does the above sound like a good title? It probably does, except that it is not true. This is how I would have worded it:

A famous actor, Stephen Fry, is outraged by the theft of his voice by a neural network’s operators.

The promotion of AI is invasive, bordering on aggressive. Such tactics, such behaviour, are characteristic of criminals: they are brazen and ruthless, and they stop only when faced with an overpowering force. Everyone who speaks out against AI becomes a target for professional trolls. I wrote about Alyona Andronova, whose voice had been stolen by Tinkoff Bank, but who am I to be listened to? I am an easy target for trolls. But now enter a world-famous actor, Stephen Fry. Perhaps now someone will listen?

The good thing is that every area of human activity that stands to cause massive damage is regulated by the government. Naked short selling is restricted in the USA; it is illegal to run over pedestrians with one's car; many jurisdictions limit firearms ownership; sarin and phosgene are not sold at pharmacies; but inexplicably, the state is in no rush to regulate AI, even though many speak out against its uncontrolled proliferation.

It is true that some opponents of AI spout total nonsense. But not all do. Some raise valid, well-thought-through arguments against the Wild West approach to AI, arguments which, to the shame of regulatory authorities, are also quite obvious. However, there is one argument I have never yet heard: AI technology was never conceived to serve as a vehicle for criminal, fraudulent use. If IT organizations and their staff act against members of society and in the interests of the criminal element, then such activity should first be curtailed and only then studied, monitored, or corrected.

For now, AI is enjoyed by those who urgently need a marginally fitting picture that resembles the real world. Few give it a thought… Here I could have put a period, but that is not what I intended to write: few give a thought to the fact that neural networks do not conceive images all by themselves. They stockpile millions of images, process the meaningful tags associated with each of them, and choose only those images that fit the task definition. The AI does not think: it only uses statistical weights to select matching images and compile them into the desired result. Those millions of source images do not come from thin air! AI operators took them from artists without compensation: this is theft. Their for-profit business is founded on theft, and this proves my starting point regarding the criminal nature of the AI being promoted.

And suddenly we learn that voice neural networks steal voices. Whoa, who would have thought! Indeed, what could be the source of voices if not that of living voice actors? The AI operators could not have dreamed the whole thing up!

How about human civilization approaching neural networks and AI the same way it approached cars? As soon as cars began to run people over and crash, registration, road signs, street lights, and corresponding legislation were introduced. More than one hundred years later, civilization should draw on its experience with emerging technologies and deal with AI accordingly, rather than Wild West style: no one gives a damn until some ten thousand people are killed.