248: A Public Service Announcement on Shared VPCs in AWS: Don’t!

The Cloud Pod

01-03-2024 • 1 hr 15 mins

Welcome to episode 248 of The Cloud Pod podcast – where the forecast is always cloudy! It’s the return of our Cloud Journey Series! Plus, today we’re talking shared VPCs and why you should avoid them, Amazon’s new data centers (we think they forgot about the sustainability pledge), new threats to and from AI, and a quick preview of Next ‘24 programs – plus much more!

Titles we almost went with this week:

  • The Cloud Pod Isn’t a Basic Bitch
  • New AWS Data Solutions Framework – or – How You Accidentally Spent $100k’s
  • A PSA on Shared VPCs in AWS
  • Amazon Doesn’t Even Pay Attention to Climate When it’s on a Building
  • Vector Search I Hardly Know Her
  • Google Migs are Less Fun than Russian Migs
  • AI Can Now Attack Us; Who Didn’t See That Coming
  • Who is Surprised That AWS is Using More Power Than the Rest of the State of Oregon
  • Spend all the Dinero in Spain

A big thanks to this week’s sponsor:

We’re sponsorless this week! Interested in sponsoring us and having access to a specialized and targeted market? We’d love to talk to you. Send us an email or hit us up on our Slack Channel.

AI is Going Great (or how ML Makes all Its Money)

01:24 Disrupting malicious uses of AI by state-affiliated threat actors

  • In this week’s chapter of AI nightmares, OpenAI tells us how they are blocking the use of their AI by state-affiliated threat actors. Awesome; things went from bad to worse in one week. Cool. Cool cool cool.
  • In partnership with Microsoft Threat Intelligence, they have disrupted five state-affiliated actors that sought to use their AI service in support of malicious cyber activities.
  • These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks.
    • Charcoal Typhoon (China affiliated) researched various companies and cybersecurity tools, debugged code and generated scripts, and created content likely for use in phishing campaigns.
    • Salmon Typhoon (China affiliated) translated technical papers, retrieved publicly available information on multiple intelligence agencies and regional threat actors, assisted with coding, and researched common ways processes could be hidden on a system.
    • Crimson Sandstorm (Iran affiliated) used OpenAI services for scripting support related to app and web development, generating content likely for spear-phishing campaigns, and researching common ways malware could evade detection.
    • Emerald Sleet (North Korea affiliated) identified experts and organizations focused on defense issues in the Asia-Pacific region, researched publicly available vulnerabilities, and used OpenAI services for help with basic scripting tasks and drafting content that could be used in phishing campaigns.
    • Forest Blizzard (Russia affiliated) used OpenAI services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.
  • OpenAI says that while the capabilities of current models are limited, they believe it’s important to stay ahead of significant and evolving threats.
  • To continue making sure their platform is used for good, they have a multi-pronged approach: