Have you ever had a lucid dream? You can have something close during a hypnagogic state – that period between being awake and falling asleep when reality can get sort of blurry. You might hear music as if you’re wearing headphones, or see vivid images of something you’ve been really into lately. And when an actual lucid dream occurs, it can be a wild ride. Lucid dreams may only last a few seconds, but you can learn to control them and stretch them out. It’s just a pretty incredible phenomenon all around.

Your Brain On Lucid Dreaming

Speaking of which, scientists in the Netherlands recently dove into the realm of lucid dreaming by looking at brain scans performed on people as they experienced them. What they found was both a decrease in beta wave activity in the right temporoparietal junction, which is associated with social cognition, and more effective communication between brain regions overall when compared to non-lucid dreaming.

They also noted a spike in gamma activity in the precuneus region at the moment the dreamer realized they were lucid dreaming. That region is also important for visual imagery and self-awareness. It’s a pretty compelling study! We can only wonder what lucid dreaming might say about consciousness. And personally, I’m hoping for some kind of dream machine someday. Forget Nintendo Switch, just plug me right in!

Robot System Learns from Watching YouTube Videos

In news that might add to your artificial intelligence despair, researchers at Cornell University are working on a robot framework that basically involves their AI robot learning how to do things by watching YouTube videos. It’s how a lot of people learn these days – why not robots? The framework is called RHyME (Retrieval for Hybrid Imitation under Mismatched Execution). It can watch a how-to video once and, recalling other videos in its dataset, learn how to do whatever task is shown. No programming or prompting required! The examples used involve teaching a robot arm to put a cup in a sink and to turn off a touch-based light.

Human vs. Robot Half Marathon Actually Happens

Moving right along on our path to the robot apocalypse, that human vs. robot half marathon did finally happen in Beijing. One of the robots showed up in a blue and chrome track suit and headphones. I’m not sure how any human can compete with that. Robots of various shapes and sizes ran in the race, from the aforementioned tiny glamorbot to more realistic humanoids. Some wore shoes, some fell down. The half marathon involved 20 teams over a distance of 13.1 miles (21.1 kilometers). This might just pave the way for more human vs. robot sporting activities in the future!

Our two videos this week couldn’t be more different from one another. First, we have an a cappella version of the Jurassic Park theme by MayTree, in honor of recent de-extinction attempts that certainly won’t cause us any trouble down the road. I was originally going to share that video of a scud cloud over Australia, but wouldn’t you know it, that one was about two years old and reposted by someone claiming to be from the Associated Press (they were not). I hate when that happens. The second video is of RoboCakes. They’re, well, robotic cakes. The gummy bears move!

AI Nightmares of the Week

Should you say please and thank you to your AI chatbot of choice? Do you? Is it something we should even worry about? Sam Altman, CEO of OpenAI, says doing so wastes so many resources, you have no idea. According to Futurism, Altman claims that politeness to their AI costs somewhere around tens of millions of dollars. Tokens aren’t cheap, after all, and every word in every response costs compute to generate. But at the same time, Altman also says, “You never know.” While being kind may spare some of us from robot retribution, should things ever play out that way, being polite also sets the tone for our conversations with chatbots, and that can be helpful (or just kind of pleasant).
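If you’re curious how a few polite words could add up to real money, here’s a back-of-envelope sketch. Every number in it is a made-up assumption for illustration – not an OpenAI figure – but it shows how small per-message costs multiply at scale:

```python
# Hypothetical back-of-envelope estimate of what "please" and "thank you"
# might cost at chatbot scale. All numbers below are assumptions.

extra_tokens_per_message = 6        # assumed: "please", "thank you", and the reply
messages_per_day = 1_000_000_000    # assumed daily message volume
cost_per_million_tokens = 2.00      # assumed blended price in $/1M tokens

# Total extra tokens per day, converted to dollars at the assumed rate.
daily_cost = (extra_tokens_per_message * messages_per_day
              / 1_000_000 * cost_per_million_tokens)
yearly_cost = daily_cost * 365

print(f"~${daily_cost:,.0f}/day, ~${yearly_cost:,.0f}/year")
# → ~$12,000/day, ~$4,380,000/year
```

Tweak the assumptions and you can easily land in the tens-of-millions range Altman describes – which is the point: tiny per-message overhead times a billion messages is never tiny.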

Until it isn’t! In another story, we have the spectre of bad actors and, as TechXplore shows, the potential to “poison” an AI model with bad data. What would happen if a previously reliable model suddenly began spitting out incorrect or even dangerous information? It already happens; just ask Google’s AI Overviews if it’s okay to eat rocks or something (though I’m sure they’ve since fixed that).
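To see why poisoned training data is so insidious, here’s a toy sketch (all data is made up; real attacks target far bigger models). A simple nearest-neighbor “model” gives sensible answers until an attacker slips a few mislabeled points into its training set:

```python
# Toy illustration of data poisoning. The "model" is 1-nearest-neighbor:
# it answers a query with the label of the closest training point.

def nearest_label(x, dataset):
    """Return the label of the training point closest to x."""
    return min(dataset, key=lambda pair: abs(pair[0] - x))[1]

# Clean training data: low values are "safe", high values are "unsafe".
clean = [(1.0, "safe"), (2.0, "safe"), (8.0, "unsafe"), (9.0, "unsafe")]
print(nearest_label(1.5, clean))      # → safe

# An attacker adds just two mislabeled points near the safe cluster...
poisoned = clean + [(1.4, "unsafe"), (1.6, "unsafe")]
print(nearest_label(1.5, poisoned))   # → unsafe
```

Two bad data points out of six, and the same question now gets a dangerously wrong answer – the model itself never “broke,” its training data did.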

That’s all for this week’s edition of Weeklies. Hopefully you found something interesting in the batch! Check the homepage daily for the latest news, or catch the previous Weeklies for slightly older news! Also follow me on X for the occasional something.