SoftRoots Industry News Support
Security News
Friday, January 23, 2026
- The next generation of disinformation: AI swarms can threaten democracy by manufacturing fake public consensus
An international research team involving Konstanz scientist David Garcia warns that the next generation of influence operations may not look like obvious "copy-paste bots," but like coordinated communities: fleets of AI-driven personas that can adapt in real time, infiltrate groups, and manufacture the appearance of public agreement at scale.
- Stress-testing AI vision systems: Rethinking how adversarial images are generated
Deep neural networks (DNNs) have become a cornerstone of modern AI technology, driving a thriving field of research in image-related tasks. These systems have found applications in medical diagnosis, automated data processing, computer vision, and various forms of industrial automation, to name a few.
Thursday, January 22, 2026
- Hacking the grid: How digital sabotage turns infrastructure into a weapon
The darkness that swept over the Venezuelan capital in the predawn hours of Jan. 3, 2026, signaled a profound shift in the nature of modern conflict: the convergence of physical and cyber warfare. While U.S. special operations forces carried out the dramatic seizure of Venezuelan President Nicolás Maduro, a far quieter but equally devastating offensive was taking place in the unseen digital networks that help operate Caracas.
Wednesday, January 21, 2026
- Misleading text in the physical world can hijack AI-enabled robots, cybersecurity study shows
As a self-driving car cruises down a street, it uses cameras and sensors to perceive its environment, taking in information on pedestrians, traffic lights, and street signs. Artificial intelligence (AI) then processes that visual information so the car can navigate safely.
- How do we make sure AI is fair, safe, and secure?
AI is ubiquitous now—from interpreting medical results to driving cars, not to mention answering every question under the sun as we search for information online. But how do we know it is safe to use, and that it's not generating answers from thin air?
Tuesday, January 20, 2026
- Research reveals a surprising line of defense against cyber attacks: Accountants
When Optus, Medibank and non-bank lender Latitude Financial were hit by separate cyber attacks in the past few years, millions of Australians felt the fallout: stolen personal data, disrupted services and weeks of uncertainty. Each breach raised the same uncomfortable question: how can this keep happening?
- The sky is full of secrets: Glaring vulnerabilities discovered in satellite communications
With $800 of off-the-shelf equipment and months' worth of patience, a team of U.S. computer scientists set out to find out how well geostationary satellite communications are encrypted. And what they found was shocking.
Monday, January 19, 2026
- Ransomware: What it is and why it's your problem
Ransomware is a type of malicious software that makes a victim's data, system or device inaccessible. It locks the target or encrypts its data (scrambling it into an unreadable form) until the victim pays a ransom to the attacker.
- Cyberattacks can trigger societal crises, scientists warn
Cyberattacks can wreak havoc on the systems they target, yet their impact often spreads far beyond technical failures, potentially triggering crises that engulf entire communities, a new study argues.
- 4 in 5 small businesses faced cyberscams in 2025, and almost half of attacks were AI-powered
One more reason things cost more today: cybercrime.
Thursday, January 15, 2026
- Forensic system cuts IoT attack analysis time by three-quarters
A new forensic framework designed specifically for the Internet of Things (IoT) is discussed in the International Journal of Electronic Security and Digital Forensics. This deep learning-driven system offers benefits over earlier approaches in detecting and reconstructing cyberattacks on components of the vast network of connected sensors, appliances and machines. It achieves an accuracy of almost 98%, according to the researchers, and cuts analysis time by more than three-quarters.
- AIs behaving badly: An AI trained to deliberately make bad code will become bad at unrelated tasks, too
Artificial intelligence models that are trained to behave badly on a narrow task may generalize this behavior across unrelated tasks, such as offering malicious advice, suggests a new study. The research probes the mechanisms that cause this misaligned behavior, but further work must be done to find out why it happens and how to prevent it.
- One Tech Tip: Californians have a new privacy tool for deleting their data
New year, new privacy rules. At least for Californians.
Wednesday, January 14, 2026
- New legal framework clarifies liability for AI-generated child abuse images
A short, seemingly harmless command is all it takes to use Elon Musk's chatbot Grok to turn public photos into revealing images—without the consent of the people depicted. For weeks, users have been flooding the platform X with such deepfakes, some of which show minors.
- Your voice gives away valuable personal information—expert raises privacy concerns
You can probably quickly tell from a friend's tone of voice whether they're feeling happy or sad, energetic or exhausted. Computers can already do a similar analysis, and soon they'll be able to extract a lot more information. It's something we should all be concerned about, according to Associate Professor in Speech and Language Technology, Tom Bäckström. Personal information encoded in your voice could lead to increased insurance premiums or to advertising that exploits your emotional state. Private information could also be used for harassment, stalking or even extortion.
Tuesday, January 13, 2026
- What can technology do to stop AI-generated sexualized images?
The global outcry over the sexualization and nudification of photographs—including of children—by Grok, the chatbot developed by Elon Musk's artificial intelligence company xAI, has led to urgent discussions about how such technology should be more strictly regulated.
- 'Rosetta stone' for database inputs reveals serious security issue
The data inputs that enable modern search and recommendation systems were thought to be secure, but an algorithm developed by Cornell Tech researchers successfully teased out names, medical diagnoses and financial information from encoded datasets.
Saturday, January 10, 2026
- Danish chemist's invention could make counterfeiting a thing of the past
Every year, companies lose revenue when goods are copied or illegally resold. Now, a new digital and legally binding fingerprint developed at the University of Copenhagen makes products impossible to counterfeit. Royal Copenhagen is among the first brands in the world to use the solution.
Wednesday, January 7, 2026
- What does cybersecurity look like in the quantum age?
Quantum computers promise unprecedented computing speed and power that will advance both business and science. These same qualities also make them a prime target for malicious hackers, according to Swaroop Ghosh, professor of computer science and of electrical engineering at the Penn State School of Electrical Engineering and Computer Science.
Tuesday, January 6, 2026
- Patient privacy in the age of clinical AI: Scientists investigate memorization risk
What is patient privacy for? The Hippocratic Oath, thought to be one of the earliest and most widely known medical ethics texts in the world, reads: "Whatever I see or hear in the lives of my patients, whether in connection with my professional practice or not, which ought not to be spoken of outside, I will keep secret, as considering all such things to be private."
- N. Zealand health hackers seek cash and 'good reputation'
Hackers claiming to have accessed more than 100,000 people's health records in New Zealand have reportedly extended a ransom deadline until Friday, after saying they want to build a "good reputation."
Wednesday, December 31, 2025
- How California's Delete Act will protect personal information from data brokers in the New Year
Use a loyalty card at a drug store, browse the web, post on social media, get married or do anything else most people do, and chances are companies called data brokers know about it—along with your email address, your phone number, where you live and virtually everywhere you go.
Monday, December 29, 2025
- Deepfakes leveled up in 2025—here's what's coming next
Over the course of 2025, deepfakes improved dramatically. AI-generated faces, voices and full-body performances that mimic real people rose in quality far beyond what even many experts expected just a few years ago. They were also increasingly used to deceive people.
Monday, December 22, 2025
- Spotify says piracy activists hacked its music catalogue
Music streaming service Spotify said Monday it had disabled accounts from a piracy activist hacker group that claimed to have "backed up" millions of Spotify's music files and metadata.
Thursday, December 18, 2025
- American Airlines testing new boarding technology at DFW Airport
Imagine a future where you board an American Airlines flight without a gate agent scanning a boarding pass.
- 'Personality test' shows how AI chatbots mimic human traits—and how they can be manipulated
Researchers have developed the first scientifically validated "personality test" framework for popular AI chatbots, and have shown that chatbots not only mimic human personality traits, but their "personality" can be reliably tested and precisely shaped—raising implications for AI safety and ethics.
Wednesday, December 17, 2025
- AI system protects wireless networks from jamming attacks in real time
A research team at the University of Ottawa has developed an advanced artificial intelligence system designed to autonomously defend wireless networks from jamming attacks, operating much like a digital immune system. This technology can automatically detect and respond to jamming in real time, which could play a critical role in securing Canada's communications infrastructure.
- The spoofing problem: Why tech platforms' age verification may not protect minors
As platforms rush to verify users' ages, experts warn consumer-grade cameras lack the technology to reliably authenticate minors.
Monday, December 15, 2025
- AI chatbot to help cybersecurity teams protect infrastructure
Experts led by Professor Carsten Maple at the University of Warwick's Cyber Security Center have developed a new tool, called ICSThreatQA, to tackle the problem of cybersecurity breaches.
Sunday, December 14, 2025
- Tech-savvy users have the most digital concerns, study finds
Digital concerns around privacy, online misinformation, and work-life boundaries are highest among highly educated, Western European millennials, finds a new study from researchers at UCL and the University of British Columbia.