Interview Series: AI in Production. Paul Shaburov, founder of Glam.ai
AISA presents Interview Series: AI in Production. Our first guest is Paul Shaburov, a 26-year-old founder and CEO of Glam AI. He has a background in computer science, specifically in natural language processing and training chatbots. Glam AI is a consumer-focused, AI-content-first social media platform. The app currently has 16 million total downloads and 2.5 million monthly active users, generating 7 million pieces of content per month.

In this video:
Glam AI Overview
Startup Iteration and Metrics
Technological Innovations
Future of Generative AI
Business Operations

Create your Gen AI visuals together with AISA. Our workshops and events calendar: https://luma.com/aistartacademy
Cybersecurity. Part 1. Modern Penetration Testing
Meet Bogdan — Founder & CEO of CQR Company. Bogdan has spent 15 years in cybersecurity professionally. He's found vulnerabilities in Instagram, investigated TikTok leaks, tracked FBI data exposures, and run red team operations for some of the biggest companies in the world. For one full day at AISA Bootcamp, he teaches non-technical founders exactly how attackers see your business — before writing a single line of code.

The cybersecurity lecture starts with Modern Penetration Testing, a field whose core methodologies and tools have remained relatively consistent over the past several years, despite the increasing integration of AI in broader software development and cybersecurity. Key resources and concepts discussed in the video include:

Vulnerability Databases: While legacy sites like Exploit DB exist, they often lack advanced filtering and search capabilities. The speaker introduced a new, AI-enhanced platform where vulnerabilities are categorized and indexed, allowing users to search by specific vendors, systems, and years.

Tool Repository: A free resource, secure.tools, has been developed to provide a categorized collection of penetration testing tools, such as those for brute force and reconnaissance. This aims to simplify the testing process without requiring financial investment.

AI Integration: AI is used to process large datasets—such as 45,000 documented exploits—to provide clear descriptions, author information, and mitigation strategies. AI also helps interpret the extensive output generated when running security tools.

Practical Example (WordPress): The speaker used WordPress as an example of an environment prone to vulnerabilities due to its reliance on third-party plugins. These plugins, often written in PHP and not thoroughly analyzed for security, can lead to vulnerabilities like Stored XSS if they are outdated or insecure (see the short sketch after this post).

Educational Resources: The platform offers a dedicated blog covering topics like AI-driven penetration testing and SaaS security, as well as a specialized wiki focused on cybersecurity topics to help beginners build foundational knowledge.
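To make the WordPress point concrete, here is a minimal, hypothetical Python sketch of the kind of check a tester might script: reading a plugin's public readme.txt and comparing its version against the release that fixed a known Stored XSS. The target URL, plugin slug, and fixed version are placeholders rather than details from the lecture, and a check like this should only ever be run against a site you are authorized to test.

```python
# Hypothetical sketch: check whether a WordPress plugin on a site you are
# authorized to test is older than the version that fixed a known Stored XSS.
# The target URL, plugin slug, and fixed version below are illustrative only.
import re
import requests

TARGET = "https://example.com"       # assumed: a site you have permission to test
PLUGIN_SLUG = "example-gallery"      # assumed: hypothetical plugin name
FIXED_IN = (2, 3, 1)                 # assumed: version that patched the issue

def installed_version(target: str, slug: str) -> tuple[int, ...] | None:
    """Read the plugin's public readme.txt and extract its 'Stable tag' version."""
    url = f"{target}/wp-content/plugins/{slug}/readme.txt"
    resp = requests.get(url, timeout=10)
    if resp.status_code != 200:
        return None
    match = re.search(r"Stable tag:\s*([\d.]+)", resp.text, re.IGNORECASE)
    if not match:
        return None
    return tuple(int(part) for part in match.group(1).split("."))

version = installed_version(TARGET, PLUGIN_SLUG)
if version is None:
    print("Plugin not found or version not exposed.")
elif version < FIXED_IN:
    print(f"Potentially vulnerable: {version} predates the fix in {FIXED_IN}.")
else:
    print(f"Plugin version {version} appears to include the fix.")
```

Real tooling (and the AI-assisted platforms mentioned above) automates this across thousands of known exploits; the sketch only shows the basic version-comparison idea.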
AI Fundamentals. Part 16. The OpenClaw Defense Strategy
In this lecture Pavel Spesivtsev discusses OpenClaw, a highly popular GitHub repository, focusing on its rapid growth and the significant security risks associated with its agentic AI architecture.

Overview of OpenClaw
Rapid Popularity: OpenClaw gained 150,000 stars on GitHub in just 70 hours, an unprecedented rate that surpassed major projects like Linux, Google's Docker orchestration systems, and various operating systems.
Functionality: While not considered revolutionary by some software engineers, it utilizes AI agents in a loop to repeatedly execute missions based on user input.
Architecture: It incorporates smart and conventional choices in memory architecture that are difficult to achieve with other automation tools.

Security Concerns
"Security Disaster": The default setup is described as extremely dangerous because it can expose total control of a user's workstation, including microphones, cameras, files, passwords, and credit cards.
The Lethal Trifecta: Pavel explains that OpenClaw's danger stems from a combination of three factors:
- Action Execution: the ability to execute commands and send data.
- Untrusted Inputs: receiving information from sources like emails, messages, or web pages.
- Sensitive Information Access: the ability to read private files and sensitive data.
Uncontrollable Risk: If all three of these aspects are present, the system becomes "totally out of control," and currently no methodology can guarantee 100% security for this type of agentic AI.

Defense Strategy
Sandboxing: To mitigate these risks, the suggested strategy is to isolate the AI in a "sandbox" or "jail" environment (see the sketch after this post).
Limiting Access: By ensuring the AI has no access to sensitive information while it executes actions and receives inputs, its potential to cause harm is significantly limited.

━━━━━━━━━━━━━━━━━━━━━━
Want to go deeper? Join our next AI Automation Bootcamp cohort — in-person in San Francisco or online via Zoom.
Next Cohort: May 11 | https://luma.com/93k9zm39
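As a rough illustration of the sandboxing strategy (not OpenClaw's actual implementation), the Python sketch below runs each agent-proposed shell command inside a throwaway Docker container with no network access and only an empty scratch directory mounted. The image name and workspace path are assumptions made for the example.

```python
# Minimal sketch of the sandboxing idea from the lecture: run each agent action
# inside a disposable Docker container with no network and no host filesystem
# access, so the "lethal trifecta" is broken by removing sensitive-data access.
# The image name and workspace path are assumptions, not part of OpenClaw itself.
import subprocess

WORKSPACE = "/tmp/agent-workspace"   # assumed: an empty scratch directory, nothing sensitive
IMAGE = "python:3.12-slim"           # assumed: any minimal image the agent's tools need

def run_in_sandbox(command: str, timeout: int = 60) -> str:
    """Execute one agent-proposed shell command inside an isolated container."""
    docker_cmd = [
        "docker", "run", "--rm",
        "--network", "none",          # no ability to send data out
        "--read-only",                # container filesystem is immutable
        "--cap-drop", "ALL",          # drop all Linux capabilities
        "--pids-limit", "128",        # limit runaway process creation
        "--memory", "512m",
        "-v", f"{WORKSPACE}:/work",   # only the scratch dir is writable
        "-w", "/work",
        IMAGE, "sh", "-c", command,
    ]
    result = subprocess.run(docker_cmd, capture_output=True, text=True, timeout=timeout)
    return result.stdout + result.stderr

# The agent loop would call run_in_sandbox() for every action it proposes,
# instead of executing on the host where microphones, files, and credentials live.
print(run_in_sandbox("echo hello from the sandbox"))
```

The point is the isolation pattern: the agent keeps its ability to act and to read untrusted input, but the third leg of the trifecta, access to sensitive data, is taken away.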
Aloha & Welcome to AI Start Academy's Skool Community.
AI Start Academy was born in San Francisco, right in the heart of innovation and world-changing ideas. Our mission is simple: bring the latest knowledge from top Silicon Valley minds to the world. We invite leading experts, founders, and engineers to our SF classroom for live lectures, hands-on workshops, and behind-the-scenes insights. Every lesson is captured, refined, and shared with our global community — so you can access the same cutting-edge thinking that fuels the Valley. This community exists to ignite a new wave of entrepreneurs, builders, and professionals who want to learn, create, and grow together. 🌍✨ Let’s build a bright future together
AI Fundamentals. Part 15. Final Takeaways
The key takeaways focus on decision-making for automation, understanding AI mechanics, and risk management:

Automation Decision Framework: Use a simple calculation to determine if a process is worth automating by evaluating the potential ROI over 12 months. Multiply your hourly rate by the number of hours saved per week, then scale it to a full year (roughly 52 weeks). This figure establishes a budget and buffer for investing in specific automation tools (a short sketch of the calculation follows this post).

Fundamental AI Mechanics: Developing a high-level understanding of mechanics, such as tokens and attention mechanisms, provides an advantage in manipulating prompts to get better results.

Risk Mitigation and Controls: It is essential to understand how AI typically fails through hallucinations and bias, as well as the security risks involved. Implementing specific controls is necessary to manage these failures and maintain security.

System Architecture: Transitioning from simple prompts to complex agentic systems requires an understanding of system architecture. This knowledge ensures that AI implementations remain robust and controllable.

These aren't theory — they're the practical frameworks we teach at San Francisco AI Start Academy to help non-technical professionals build real AI systems.

━━━━━━━━━━━━━━━━━━━━━━
Want to go deeper? Join our next AI Automation Bootcamp cohort — in-person in San Francisco or online via Zoom.
Next Cohort: May 11 | https://luma.com/93k9zm39
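For readers who want the automation math spelled out, here is a tiny Python sketch of the 12-month budget calculation described above; the hourly rate and hours saved are example numbers, not figures from the lecture.

```python
# Quick sketch of the automation decision framework:
# 12-month value of saved time = hourly rate x hours saved per week x 52 weeks.
# The inputs below are placeholders; plug in your own.

def automation_budget(hourly_rate: float, hours_saved_per_week: float, weeks: int = 52) -> float:
    """Return the 12-month value of the time an automation would save."""
    return hourly_rate * hours_saved_per_week * weeks

# Example: $80/hour, 3 hours saved per week
budget = automation_budget(80, 3)
print(f"12-month budget/buffer for this automation: ${budget:,.0f}")  # $12,480
```

If a tool costs well under that figure (including setup time), the framework says it is worth automating; if not, skip it.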