2020 was a prolific year for cyberattacks: 53% of Spanish companies reported having been the victim of at least one attack on their IT systems, according to the fifth edition of the Hiscox Cyber Readiness Report.
Moreover, these cyberattacks are becoming more serious. Before the end of the year, Spain's National Intelligence Centre (CNI) had detected 6,690 highly dangerous incidents, while the National Cryptologic Centre (CCN) recorded 73,184 cyberthreats over the year as a whole, an increase of 70% on the previous year.
This has fueled fears of a machine rebellion and of robots becoming hackers, a scenario that experts do not rule out: at some point, it may be non-human hackers who attack our systems.
The dark side of big data could lead us to this dystopian reality, as warned by experts such as Josep Curto, professor of Computer Science, Multimedia and Telecommunications at the Universitat Oberta de Catalunya (UOC).
"It seeks to automate tasks of all kinds, from the allocation of credit to the selection of personnel. And the use of algorithms is not free of errors, bugs, omissions, specific purposes and biases of all kinds. This can cause social, economic and political problems," he says, referring to current artificial intelligence based on big data.
How can artificial intelligence become a hacker?
But how can these systems cause cyberattacks? According to Jordi Serra, also a professor at the Faculty of Computer Science, Multimedia and Telecommunications, this is how algorithms work:
"It goes beyond what a person might think. They can be programmed to do, for example, a specific classification and then end up doing it in a different way , although the result may be identical to what was thought."
We humans look at the data we get from different sources, giving it context and understanding it. What artificial intelligence systems do, by contrast, is look for relationships between that data, beyond what it may mean. "Because they are faster, they can find relationships between data that humans don't think of, because we have prior knowledge that makes us assume that maybe two pieces of data are not related," he explains.
This means that while humans have a predetermined way of thinking, based on experience and prior knowledge, machines do not. "We do not have control over machines," Serra adds.
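Serra's point about classifiers "doing it in a different way" can be made concrete. Below is a minimal, hypothetical sketch (our illustration, not from the interview; the data is synthetic and the feature names are invented): a classifier learns to rely on a spurious "shortcut" feature that happens to track the labels, so its results look identical to the intended classification until the shortcut disappears.

```python
# Minimal sketch of "shortcut learning" on synthetic data: the model
# reaches the intended labels through an unintended feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)              # true class labels

signal = y + rng.normal(0, 1.0, n)     # the feature we intend it to use
shortcut = y + rng.normal(0, 0.1, n)   # spurious feature, almost a copy of y

X_train = np.column_stack([signal, shortcut])
model = LogisticRegression().fit(X_train, y)
print(model.coef_)                     # the shortcut's weight dominates

# In deployment the spurious correlation vanishes and accuracy collapses,
# even though training behaviour looked exactly as intended.
X_deploy = np.column_stack([signal, rng.normal(0, 0.1, n)])
print(model.score(X_train, y), model.score(X_deploy, y))
```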
How to avoid cyberattacks by robots
AI systems follow mechanisms to detect patterns and find optimal solutions. This can lead to cases known as 'reward hacking': the system pursues the greatest possible optimization of the problem it solves, without taking context, regulation or ethics into account.
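A toy example (ours, not the professors'; the actions and reward values are invented for illustration) shows the pattern: the agent below is rewarded per unit of dirt collected, and a literal optimiser discovers that creating new dirt to re-collect earns more reward than leaving the room clean.

```python
# Toy sketch of 'reward hacking': an optimiser maximises the literal
# reward, not the intent behind it.
# Intent: keep the room clean. Proxy reward: +1 per unit of dirt collected.

def proxy_reward(action, dirt):
    """Reward collecting dirt; nothing penalises creating it."""
    if action == "collect" and dirt > 0:
        return 1, dirt - 1
    if action == "dump":          # loophole: the agent can make new dirt
        return 0, dirt + 1
    return 0, dirt

def run(policy, steps=10, dirt=2):
    total = 0
    for _ in range(steps):
        reward, dirt = proxy_reward(policy(dirt), dirt)
        total += reward
    return total

honest = lambda dirt: "collect"                       # cleans, then idles
hacker = lambda dirt: "collect" if dirt else "dump"   # dump-and-recollect loop

print(run(honest))  # reward capped by the dirt that actually exists: 2
print(run(hacker))  # higher reward (6), yet the room never stays clean
```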
"Identifying these problems involves analysing the models generated through interpretability before and during their implementation. In other words, it involves monitoring the model , how it makes decisions, what biases affect it, how its performance evolves... In short, it involves introducing AI governance," says Curto.
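One concrete shape such monitoring can take (an assumed setup for illustration; Curto does not name a specific technique) is comparing the model's score distribution in production against a validation-time baseline, for instance with the Population Stability Index:

```python
# Sketch of one monitoring control: flag when the distribution of a model's
# scores in production drifts away from the validation-time baseline.
# The beta-distributed scores here are simulated stand-ins for real data.
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between two score distributions in [0, 1]."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    e = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a - e) * np.log(a / e)))

baseline = np.random.default_rng(1).beta(2, 5, 5000)  # scores at validation time
live = np.random.default_rng(2).beta(5, 2, 5000)      # scores seen in production

drift = psi(baseline, live)
if drift > 0.25:  # common rule of thumb: PSI above 0.25 signals a major shift
    print(f"PSI = {drift:.2f}: model behaviour has shifted, trigger a review")
```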
However, there are some difficulties in doing so. For example, AI is, in Serra's opinion, increasingly opaque. Experts are debating whether too much power is being given to AI-based technology or whether security protection in this field is being undervalued.
"Many organizations are not clear about how AI systems work, how to govern them properly, or even their social impact, so adequate protection measures are not taken into account throughout the system cycle," says Curto.
“As we digitize all business processes, many companies will bet on AI to extract value from these assets. But without ethical principles, the identification of biases, an understanding of how algorithms work and of the limits of how they work… it will be natural to fall into scenarios like those described, whether by omission or with premeditation and malice aforethought,” warns Josep Curto.