— I see that interdisciplinarity runs in the family…
— Looks like it does. My brother continues to work in the same field; he is now an internationally recognised electrophysiologist with many influential contributions to his name. I used to visit his lab at the Cardiology Center, and Leonid Rozenshtraukh and I would have tea. Those were relatively short meetings, but they meant a lot to me. Once he gave me a piece of advice that became one of my main principles in research and in life. He told me, "Never look back, Maxim! You will always find excuses there to be less successful, less talented, less everything. Always look forward and aim high, and try to live up to those who have accomplished more than you have."

— One way or another, my entire career in science has revolved around data processing and analysis, which is part of what people often call "artificial intelligence" these days. My undergraduate project was titled "Image Processing in Modern Medical Practice". When I was working on my PhD thesis, there was suddenly an explosion, one of many, of renewed interest in neural networks and in data and image processing. That was more than twenty years ago; I remember Gorban's now-popular book on neural networks sitting on my desk. In Cambridge, we were among the first to handle dozens of terabytes of data (a gigantic volume for the early 2000s) and to use parallel file systems. But I didn't identify as an AI professional at the time, because AI and big data were just everyday working tools for our research. It's like building a table with different screwdrivers, and suddenly one of them becomes extremely popular.
— With artificial intelligence, you can rarely, if ever, find a precise definition of what it is exactly.
— For a number of reasons, the topic is characterized by a high degree of ontological uncertainty. It is no longer "cool" to use the term "artificial intelligence" in professional circles; we prefer to speak of machine learning, a technology encompassing many different areas. There exist some 100 different definitions of artificial intelligence across expert communities, which complicates standardization and technical documentation in the field. This ambiguity gets in the way of any attempt to forge a regulatory framework: how do you regulate something you cannot clearly define?

— The European Union tried to come up with a similar system, but last year they banned the use of artificial intelligence for awarding social credit scores.
— They did. I actually took part in those debates at various international forums, including UNESCO. Although I rarely concur with my EU colleagues on most key issues, in this case I backed their initiative to put social ranking on hold. I suppose the EU community is too multicultural for that. Working on their social credit model, the developers must have hit a point where they realized the model was getting too complicated. Too complicated to work out, is my guess.

— I know you researched new molecules for a coronavirus drug at Skoltech. How is that going?
— We found a few promising candidates, and we largely owe this to the new methods developed by Petr Popov and his colleagues. But new drug development is a lengthy process; it will be a while before we can tell whether those molecules are any good. The pandemic kick-started many regulatory processes, but the rule of thumb in medicine still holds: "do no harm". Remember how it was with certain now-illegal drugs? Cocaine was once sold over the counter as a cough suppressant, and opium as a cure for diarrhea, before it transpired that both had a host of side effects. Such practices are unthinkable today, but the lessons the pharmaceutical industry learned in the past can help us avoid many tragic errors in the field of AI. We have already let too many genies out of the bottle without due assessment of the risks involved.