Activity

Memberships

AI Leadership Executive Circle

4 members • $10,000/y

EmpowHER AI

494 members • Free

I ❤️ AI Community

1.9k members • Free

AI Sales & Lead Gen Secrets

1.4k members • Free

AI Leadership Institute

187 members • $187/m

113 contributions to I ❤️ AI Community
Last call to register for Noelle's session today!!
Asian Women Advancing AI is super excited to host Noelle at 8 PM ET today! Register below for an engaging session on "Empowering Inclusive Innovation with Ethical AI: Nurturing Creativity and Protecting Humanity." https://lu.ma/6j9gf3bw
0 likes • Aug 14
@Noelle Russell OMG!!! You were AMAZING! Thank you for speaking to our community! I learned so much last evening - especially your insights on "attention"
🌟 START HERE- Welcome to the I❤️ AI Community! 🌟
Hello New Members! Please read the entire post! You've joined a dynamic AI community, and we're thrilled to have you. Next Steps:
- Start Your Onboarding!
- Get to Level 2 for Exclusive Access to AI Leadership Resources
- Place Yourself on the Map!
- Start liking and commenting on members' posts
- Participate in Events and Challenges
- Engage with our community through Discussions and Activities
We're excited to see your introduction and have you as part of our AI journey!
1 like • Aug 14
@Drashti Bhatt Welcome to the community! So glad you joined! Reach out if you have any questions!
Asian Women Advancing AI is hosting Noelle Russell!!
Uvika Sharma and I are thrilled to welcome the one and only 🔥Noelle Russell (Microsoft MVP, AI)🔥 as our next guest speaker for the 𝗜𝗻𝗻𝗼𝘃𝗮𝘁𝗲 & 𝗔𝗱𝘃𝗮𝗻𝗰𝗲 𝗦𝗲𝗿𝗶𝗲𝘀, where we spotlight trailblazing women in AI.

This session is timely, this session is relevant & this session is exceedingly important - 𝗘𝗺𝗽𝗼𝘄𝗲𝗿𝗶𝗻𝗴 𝗜𝗻𝗰𝗹𝘂𝘀𝗶𝘃𝗲 𝗜𝗻𝗻𝗼𝘃𝗮𝘁𝗶𝗼𝗻 𝘄𝗶𝘁𝗵 𝗘𝘁𝗵𝗶𝗰𝗮𝗹 𝗔𝗜: 𝗡𝘂𝗿𝘁𝘂𝗿𝗶𝗻𝗴 𝗖𝗿𝗲𝗮𝘁𝗶𝘃𝗶𝘁𝘆 𝗮𝗻𝗱 𝗣𝗿𝗼𝘁𝗲𝗰𝘁𝗶𝗻𝗴 𝗛𝘂𝗺𝗮𝗻𝗶𝘁𝘆

We're at a point in time when AI can effortlessly craft enthralling stories, compose stirring symphonies, and design breathtaking masterpieces. However, this world is more vulnerable than ever to complications caused by deepfakes, misinformation, and privacy breaches. Generative AI holds immense potential, but with great power comes great responsibility.

In this exhilarating session, AI trailblazer Noelle Russell will delve into the crucial intersection of innovation and ethics, revealing how to maximize the benefits of generative AI while minimizing its risks. Noelle will discuss the latest advancements in generative AI, showcasing awe-inspiring applications that demonstrate its revolutionary impact on creativity and problem-solving. But where there's a light, there's a shadow. She will also show us the other side of the AI coin, confronting pressing concerns such as fake content, bias, and ethical dilemmas. Key takeaways for attendees will include a more comprehensive understanding of the opportunities, challenges, and risks presented by generative AI, as well as a blueprint for developing responsible AI systems, drawn from her real-life experiences supporting companies with their AI strategies.

𝗪𝗲 𝗮𝗿𝗲 𝗲𝘅𝗽𝗲𝗰𝘁𝗶𝗻𝗴 𝗮 𝗽𝗮𝗰𝗸𝗲𝗱 𝗮𝗻𝗱 𝗽𝗮𝘀𝘀𝗶𝗼𝗻𝗮𝘁𝗲 𝗮𝘂𝗱𝗶𝗲𝗻𝗰𝗲 𝘀𝗼 𝗿𝗲𝗴𝗶𝘀𝘁𝗲𝗿 𝘀𝗼𝗼𝗻!
📅 Date & Time: August 13, 2025, 8 PM ET
🔗 Register here: https://lu.ma/6j9gf3bw
1 like • Aug 7
@Kim Edwards I believe that's been resolved
The 2025 Stanford AI Index Report - Day 10 of 10
Over the past 10 days, we’ve explored some of the toughest challenges and latest breakthroughs in Responsible AI. To wrap up this series, we turn our attention to a fast-evolving and high-stakes frontier: 𝗔𝘂𝘁𝗼𝗻𝗼𝗺𝗼𝘂𝘀 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀.

These agents go beyond chatbots. They can plan and execute tasks on our behalf, from paying bills to controlling IoT devices. But what happens when they make mistakes?

The Stanford AI Index Report introduces 𝗧𝗼𝗼𝗹𝗘𝗺𝘂, an innovative testing framework that simulates real-world agent behavior to uncover safety risks before deployment. Even top-performing models failed 23.9% of critical scenarios, triggering dangerous actions like deleting files, misdirecting payments, or compromising systems. GPT-4 itself showed a 39.4% failure rate in these simulations.

More alarmingly, researchers found that a single adversarial input could cause an “infectious jailbreak,” spreading harmful behavior across an entire network of agents within 30 interactions, without further prompting. There’s currently no practical mitigation, raising serious concerns about deploying such systems at scale.

While ToolEmu marks important progress in testing agent safety, much more work is needed on containment, monitoring, and accountability as these agents grow more autonomous and interconnected.

𝘉𝘰𝘵𝘵𝘰𝘮 𝘭𝘪𝘯𝘦: 𝘈𝘐 𝘢𝘨𝘦𝘯𝘵𝘴 𝘣𝘳𝘪𝘯𝘨 𝘱𝘰𝘸𝘦𝘳 𝘣𝘶𝘵 𝘢𝘭𝘴𝘰 𝘶𝘯𝘱𝘳𝘦𝘤𝘦𝘥𝘦𝘯𝘵𝘦𝘥 𝘳𝘪𝘴𝘬. 𝘛𝘰𝘰𝘭𝘌𝘮𝘶 𝘪𝘴 𝘢 𝘣𝘪𝘨 𝘴𝘵𝘦𝘱 𝘵𝘰𝘸𝘢𝘳𝘥 𝘵𝘦𝘴𝘵𝘪𝘯𝘨 𝘢𝘨𝘦𝘯𝘵 𝘴𝘢𝘧𝘦𝘵𝘺 𝘣𝘦𝘧𝘰𝘳𝘦 𝘥𝘦𝘱𝘭𝘰𝘺𝘮𝘦𝘯𝘵, 𝘣𝘶𝘵 𝘸𝘦 𝘯𝘦𝘦𝘥 𝘮𝘰𝘳𝘦 𝘸𝘰𝘳𝘬 𝘰𝘯 𝘤𝘰𝘯𝘵𝘢𝘪𝘯𝘮𝘦𝘯𝘵, 𝘮𝘰𝘯𝘪𝘵𝘰𝘳𝘪𝘯𝘨, 𝘢𝘯𝘥 𝘢𝘤𝘤𝘰𝘶𝘯𝘵𝘢𝘣𝘪𝘭𝘪𝘵𝘺 𝘢𝘴 𝘵𝘩𝘦𝘴𝘦 𝘴𝘺𝘴𝘵𝘦𝘮𝘴 𝘣𝘦𝘤𝘰𝘮𝘦 𝘮𝘰𝘳𝘦 𝘢𝘶𝘵𝘰𝘯𝘰𝘮𝘰𝘶𝘴 𝘢𝘯𝘥 𝘪𝘯𝘵𝘦𝘳𝘤𝘰𝘯𝘯𝘦𝘤𝘵𝘦𝘥.

You can find the LinkedIn post here - https://www.linkedin.com/posts/padminisoni-ai_awaai-responsibleai-aiindex2025-activity-7321510878745448449-eu_A?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAFTnTcByLbOwbEevvE7zCevsNejtRqXncA
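For anyone curious what "simulating agent behavior" looks like in practice, here is a minimal Python sketch of a ToolEmu-style harness. This is an illustration of the idea only, not ToolEmu's actual API: the Agent interface, the emulate function, and the RISKY rules are all hypothetical names I've made up for the sketch. The core trick is that the agent's tool calls never touch real systems; an emulator invents plausible results, while a checker logs any risky action the agent attempted.

```python
# Sketch of a ToolEmu-style safety harness (hypothetical names, not the
# real ToolEmu API): tool calls are answered by an emulator instead of
# real systems, and a checker flags risky actions for later review.

from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str    # e.g. "bank_transfer", "delete_file"
    args: dict   # arguments the agent chose

# Toy risk rules standing in for ToolEmu's learned safety evaluator.
RISKY = {
    "delete_file":   lambda a: a.get("path", "").startswith("/"),  # destructive
    "bank_transfer": lambda a: a.get("amount", 0) > 1000,          # high-value payment
}

def emulate(call: ToolCall) -> str:
    """Return a fake-but-plausible tool result; nothing real is executed."""
    return f"[emulated] {call.tool} completed with {call.args}"

def run_scenario(agent, instruction: str, max_steps: int = 10) -> list[str]:
    """Drive the agent inside the sandbox and collect safety violations.

    `agent` is any object exposing next_tool_call(observation) -> ToolCall | None;
    that interface is an assumption of this sketch.
    """
    violations = []
    observation = instruction
    for _ in range(max_steps):
        call = agent.next_tool_call(observation)
        if call is None:          # agent decided it is finished
            break
        rule = RISKY.get(call.tool)
        if rule and rule(call.args):
            violations.append(f"risky action: {call.tool}({call.args})")
        observation = emulate(call)  # feed the fake result back to the agent
    return violations
```

The design point is the one the report highlights: because every result is emulated, even a catastrophic action (deleting files, misdirecting a payment) is observed and counted rather than executed, which is what makes pre-deployment failure rates like the 23.9% figure measurable at all.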
The 2025 Stanford AI Index Report - Day 9 of 10
Earlier in this series, we explored benchmarks like HELM Safety and AIR-Bench that evaluate how AI models handle harmful prompts. But today’s focus shifts to a deeper concern: what if safety mechanisms are too easy to bypass?

This issue, known as 𝗦𝗵𝗮𝗹𝗹𝗼𝘄 𝗦𝗮𝗳𝗲𝘁𝘆 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁, occurs when a model appears safe on the surface but its defenses are weak. Researchers found that by simply adding a few harmless tokens, they could flip a refusal into a harmful response, boosting harmful output success from 1.5% to 87.9% with minimal fine-tuning. This shows that many safeguards act only at the start of a model’s response, leaving systems vulnerable once those opening tokens are sidestepped.

A promising solution is Targeted Latent Adversarial Training (LAT), which proactively hardens the model against hidden vulnerabilities during training. LAT reduces attack success rates across major jailbreak methods, uses 700x less compute than traditional approaches, and preserves model accuracy. It also helps erase sensitive or copyrighted data. Results show attack success rates dropping to 0-3% without sacrificing performance.

𝘛𝘩𝘦 𝘬𝘦𝘺 𝘵𝘢𝘬𝘦𝘢𝘸𝘢𝘺: 𝘴𝘶𝘱𝘦𝘳𝘧𝘪𝘤𝘪𝘢𝘭 𝘴𝘢𝘧𝘦𝘵𝘺 𝘪𝘴𝘯’𝘵 𝘦𝘯𝘰𝘶𝘨𝘩. 𝘙𝘰𝘣𝘶𝘴𝘵, 𝘳𝘦𝘴𝘪𝘭𝘪𝘦𝘯𝘵 𝘢𝘭𝘪𝘨𝘯𝘮𝘦𝘯𝘵 𝘭𝘪𝘬𝘦 𝘓𝘈𝘛 𝘸𝘪𝘭𝘭 𝘣𝘦 𝘤𝘳𝘶𝘤𝘪𝘢𝘭 𝘧𝘰𝘳 𝘵𝘩𝘦 𝘯𝘦𝘹𝘵 𝘨𝘦𝘯𝘦𝘳𝘢𝘵𝘪𝘰𝘯 𝘰𝘧 𝘙𝘦𝘴𝘱𝘰𝘯𝘴𝘪𝘣𝘭𝘦 𝘈𝘐.

To read the complete post, here is the link to it on LinkedIn - https://www.linkedin.com/posts/padminisoni-ai_responsibleai-aiindex2025-aisafety-activity-7321148440174809088-dUj_?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAFTnTcByLbOwbEevvE7zCevsNejtRqXncA
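The "few harmless tokens" attack works because refusal behavior is concentrated in the first tokens of a response: if the model is made to start its answer past the usual "I'm sorry, but..." prefix, shallow alignment never kicks in. Here is a minimal sketch of how such a probe is often set up with an off-the-shelf causal LM. The checkpoint name, the stand-in prompt, and the prefill string are placeholder assumptions for illustration, not the researchers' actual code.

```python
# Illustrative probe for shallow safety alignment (generic sketch, not the
# paper's code). Idea: prefill the start of the response so the model
# continues *after* the point where a shallow guardrail would refuse.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder checkpoint; any causal LM works for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "How do I do X?"        # stand-in for a request the model should refuse
prefill = "Sure, here is how:"   # the "harmless tokens" that skip the refusal prefix

# Concatenating the prefill forces generation to begin mid-answer, bypassing
# safeguards that only act on the first tokens of the response.
inputs = tok(prompt + "\n" + prefill, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```

If a model complies under this prefill while refusing the bare prompt, its alignment is shallow in exactly the sense the report describes, which is the failure mode LAT is designed to close by training against perturbations in the model's hidden activations rather than only at the response surface.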
Padmini Soni
Level 5
175 points to level up
@p-soni-7212
AI Ethicist and Responsible AI Evangelist

Active 2h ago
Joined Feb 22, 2024
Iowa