Industry News Support


Security News

Saturday, March 21, 2026
  • Study: 'Security fatigue' may weaken digital defenses
    From password resets and software updates to phishing alerts and cybersecurity training sessions, today's workplace is filled with constant reminders about digital security. But new research led by the University at Albany's Massry School of Business suggests those well-intentioned safeguards may be having an unintended effect.
Friday, March 20, 2026
  • Engineers devise a way to prevent manufacturing shutdowns during cyberattacks
    Rajiv Malhotra, an associate professor in the Rutgers School of Engineering Department of Mechanical and Aerospace Engineering, and a team of Rutgers students are proposing a way to defend manufacturers from cyberattacks and ensure the uninterrupted production of mission-critical national security and infrastructure parts. Malhotra proposes using a digital twin framework to improve manufacturing resilience against cyberattacks.
Wednesday, March 18, 2026
  • Study finds no single tool can fully protect online financial data
    Research in the journal Electronic Government discusses the growing need for protecting one's personal financial data as the online world faces increasingly sophisticated cyber threats. The researchers argue that no single measure is sufficient to secure the modern financial ecosystem. As such, they set out a framework that combines technological tools, regulatory oversight, and individual responsibility to combat the problem.
  • AI model trained on 14,000 Urdu news stories spots misinformation with 96% accuracy
    A deep learning model trained on more than 14,000 Pakistani news articles can spot misinformation with 96% accuracy, according to a new report in the journal Scientific Reports. It's the most comprehensive artificial intelligence system yet for detecting fake news in Urdu, the world's 10th most spoken language, with more than 170 million speakers worldwide.
  • From demons to mega behemoths: How 'monstrous' scam networks are growing
    New research led by the University of Portsmouth uncovers how scammers operate worldwide, dividing them into five "monstrous" categories. Published in the International Journal of Law, Crime and Justice, the study explores how the size of scam groups, specialized roles, and involvement of corrupt actors help scams work more effectively.
Monday, March 16, 2026
  • Grid vibrations: AI detects power supply cyberattacks in less than two seconds
    Modern energy infrastructure increasingly takes the form of cyber-physical systems, in which physical power distribution and digital communication are closely tied together. While this digitalization boosts efficiency, it exposes electricity grids to sophisticated cybersecurity risks. To combat such threats, researchers have developed an artificial intelligence (AI) method that integrates network structure analysis with data tracking to identify complex attacks that conventional security systems might miss. Details are reported in the International Journal of Global Energy Issues.
  • Why harmful content keeps reaching children online, and what advertising has to do with it
    Children today can encounter harmful material online with alarming ease, including violent, sexual and self-harm content. While this is often treated as a moderation failure, the deeper cause is economic.
Thursday, March 12, 2026
  • AI agents can autonomously coordinate propaganda campaigns without human direction
    Imagine it is two weeks before a major election in a closely contested state. A controversial ballot measure is on the line. Suddenly, a wave of posts floods X, Reddit, and Facebook, all pushing the same narrative, all amplifying each other, all generating the appearance of a massive grassroots movement. Except none of it is real.
  • 'Privacy by design': Tech protects against identity leaking during AI photo editing
    Consumers, businesses, and institutions may soon have private, secure, and trustworthy generative AI tools for editing and sharing profile photos, ID images, and personal pictures without exposing their identities to external platforms. Purdue University researchers Vaneet Aggarwal, Dipesh Tamboli, and Vineet Punyamoorty have developed the patent-pending system, which is applied before and after photos are uploaded to an AI editing platform.
Monday, March 9, 2026
  • Can people distinguish between AI-generated and human speech?
    In a collaboration between Tianjin University and the Chinese University of Hong Kong, researchers led by Xiangbin Teng used behavioral and brain activity measures to explore whether people can discern between AI-generated and human speech. The researchers also assessed whether brief training improves this ability. This work is published in eNeuro.
  • New 'negative light' technology hides data transfers in plain sight
    Engineers at UNSW Sydney and Monash have developed an innovative way of sending hidden information that is hard to intercept. Using a phenomenon known as "negative luminescence," the system makes signals blend perfectly into the background of natural heat radiation, the kind seen with a thermal camera.
  • AI fake-news detectors may look accurate but fail in real use, study finds
    A dubious link from a friend. A headline too sensational to be true. A video that seems fake but you can't be sure. As online misinformation grows harder to detect, new artificial-intelligence tools promise to help us separate fact from fiction. But do they actually work?
Wednesday, March 4, 2026
  • Deepfakes, job losses, opaque models: Exploring the dark side of AI
    Artificial intelligence (AI) has become one of the defining technologies of what economists and policymakers describe as the Fourth Industrial Revolution. This is an era in which digital, physical, and biological systems are increasingly intertwined. In practical terms, AI refers to computer systems capable of performing tasks that typically require human intelligence, such as recognizing patterns, learning from data, making predictions, and assisting in complex decisions.
  • How AI could end online anonymity
    The internet is rife with anonymous accounts as users adopt pseudonyms, sometimes for genuine reasons like speaking freely, and other times for nefarious ones. But this era of online privacy could be coming to a close. In a study available on the arXiv preprint server, researchers demonstrate that large language models (LLMs) can identify the people behind these accounts at scale.
Tuesday, March 3, 2026
  • Deepfake songs are exploding, but a new tool shuts them down
    Artificial intelligence models can now clone a voice with just a few seconds of audio, fueling a surge of deepfake songs online and creating a growing crisis for musicians who don't want their voices hijacked. Beyond the obvious intellectual property rights issue, this can lead to lost revenue and take an emotional toll on artists who put their heart and soul into their songs. But researchers now have a solution.
  • From Anthropic to Iran: Who sets the limits on AI's use in war and surveillance?
    Anthropic, a leading AI company, recently refused to sign a Pentagon contract that would allow the United States military "unrestricted access" to its technology for "all lawful purposes." As conditions for signing, Anthropic CEO Dario Amodei required two clear exceptions: no mass surveillance of Americans and no fully autonomous weapons without human oversight.
  • New ensemble AI model enhances cyber intrusion detection with high accuracy
    A study published in The Journal of Engineering Research at Sultan Qaboos University presents an advanced intrusion detection system (IDS) designed to improve the accuracy and efficiency of identifying cyberattacks. The proposed model combines a double feature selection technique with a stacked ensemble machine learning approach to enhance detection performance while reducing computational complexity.
Monday, March 2, 2026
  • AI education could be crucial in tackling rising voice scams
    A new study from Abertay University reveals that the most effective way to protect people from AI voice scams is not through traditional warning messages, but by educating them about how advanced and authentic AI voices have become. Published in the Journal of Cybersecurity, the study provides one of the first psychological countermeasures against AI voices, offering a proactive approach to fraud prevention.
  • Biometric IDs are being rolled out in Africa. Study reveals the risks and pitfalls
    Across Africa, governments are introducing digital systems that use individuals' unique physical measurements to identify them. These systems collect citizens' biometric and personal data and use it to give people access to essential public services like voting, health care, education and social protection. Biometric digital identification systems are often promoted as tools to improve efficiency, inclusion and service delivery.
  • AI often escalates to nuclear action in war games
    There are some things we might not want artificial intelligence to handle, at least for the time being. When leading chatbots were put through war-game simulations, they opted for nuclear signaling or escalation in 95% of cases.
Wednesday, February 25, 2026
  • Your car's tire sensors could be used to track you
    Researchers at IMDEA Networks Institute, together with European partners, have found that tire pressure sensors in modern cars can unintentionally expose drivers to tracking. Over a ten-week study, they collected signals from more than 20,000 vehicles, revealing a hidden privacy risk and highlighting the need for stronger security measures in future vehicle sensor systems.
  • Researchers expose critical security vulnerability in autonomous drones
    University of California, Irvine computer scientists have discovered a critical security vulnerability in autonomous target-tracking drones that could have far-reaching implications for public safety, border security and personal privacy. The UC Irvine team demonstrated how attackers could use an ordinary umbrella to manipulate drones, drawing the aircraft close enough to capture them or cause them to crash.
Tuesday, February 24, 2026
  • Ensuring smartphones have not been tampered with
    With increasing cyberattacks and government data breaches, one of the most important devices to keep secure is the one in everyone's pocket: the smartphone. The problem is that it is difficult to verify that a smartphone has not been tampered with without risking unintentional damage to the device itself.
Sunday, February 22, 2026
  • Jailbreaking the matrix: How researchers are bypassing AI guardrails to make them safer
    A paper written by University of Florida Computer & Information Science & Engineering, or CISE, Professor Sumit Kumar Jha, Ph.D., contains so many science fiction terms that you'd be forgiven for thinking it's a Hollywood script: Nullspace steering. Red teaming. Jailbreaking the matrix. But Jha's work is decidedly focused on real life, most notably strengthening the security measures built into AI tools to ensure they are safe for all to use.
