Machine Learning for Lucid Dreaming

A start-up promises that lucid dreaming will soon be possible for everyone. This was reported by the German online magazine Golem on November 10, 2023. The company is Prophetic, founded by Eric Wollberg (CEO) and Wesley Louis Berry III (CTO). In a lucid dream, dreamers are aware that they are dreaming. They can shape the dream according to their will and can also exit it. Everyone has the ability to experience lucid dreams. One can learn to induce this form of dreaming, but one can also have it as a child and unlearn it again as an adult. The Halo headband, a non-invasive neural device, is designed to make lucid dreaming possible. “The combination of ultrasound and machine learning models (created using EEG & fMRI data) allows us to detect when dreamers are in REM to induce and stabilize lucid dreams.” (Website Prophetic) According to Golem, the neural device will be available starting in 2025.
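
To make the detection step more tangible, here is a minimal sketch, not Prophetic's model, of how REM sleep might be classified from EEG epochs using band-power features and a standard classifier. The sampling rate, the synthetic stand-in data, and the feature choice are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

fs = 256  # sampling rate in Hz (assumption)

def band_power(epoch, low, high):
    """Mean spectral power of one 30-second epoch in the given frequency band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2)
    return psd[(freqs >= low) & (freqs <= high)].mean()

def features(epoch):
    # Delta, theta, alpha, and beta band power: classic sleep-staging features.
    return [band_power(epoch, lo, hi) for lo, hi in [(0.5, 4), (4, 8), (8, 12), (12, 30)]]

# Synthetic stand-in data; real training would use labelled EEG recordings.
rng = np.random.default_rng(0)
t = np.arange(30 * fs) / fs
rem = [np.sin(2 * np.pi * 6 * t) + 0.5 * rng.normal(size=t.size) for _ in range(40)]        # theta-heavy
non_rem = [np.sin(2 * np.pi * 1.5 * t) + 0.5 * rng.normal(size=t.size) for _ in range(40)]  # delta-heavy
X = np.array([features(e) for e in rem + non_rem])
y = np.array([1] * 40 + [0] * 40)  # 1 = REM, 0 = non-REM

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("REM detection accuracy on held-out epochs:", clf.score(X_test, y_test))
```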

Robot Hand Can Operate in the Dark

“Researchers at Columbia Engineering have demonstrated a highly dexterous robot hand, one that combines an advanced sense of touch with motor learning algorithms in order to achieve a high level of dexterity.” (Website Columbia Engineering, 28 April 2023) Columbia Engineering reported this on its website on April 28, 2023. The text goes on to say: “As a demonstration of skill, the team chose a difficult manipulation task: executing an arbitrarily large rotation of an unevenly shaped grasped object in hand while always maintaining the object in a stable, secure hold. This is a very difficult task because it requires constant repositioning of a subset of fingers, while the other fingers have to keep the object stable. Not only was the hand able to perform this task, but it also did it without any visual feedback whatsoever, based solely on touch sensing.” (Website Columbia Engineering, 28 April 2023) “While our demonstration was on a proof-of-concept task, meant to illustrate the capabilities of the hand, we believe that this level of dexterity will open up entirely new applications for robotic manipulation in the real world”, said Matei Ciocarlie according to the website. He is an associate professor in the Departments of Mechanical Engineering and Computer Science and developed the hand together with his graduate student Gagan Khandate.

Programming Machine Ethics

The book “Programming Machine Ethics” (2016) by Luís Moniz Pereira and Ari Saptawijaya is available for free download from Z-Library. Luís Moniz Pereira is among the best-known machine ethicists. “This book addresses the fundamentals of machine ethics. It discusses abilities required for ethical machine reasoning and the programming features that enable them. It connects ethics, psychological ethical processes, and machine implemented procedures. From a technical point of view, the book uses logic programming and evolutionary game theory to model and link the individual and collective moral realms. It also reports on the results of experiments performed using several model implementations. Opening specific and promising inroads into the terra incognita of machine ethics, the authors define here new tools and describe a variety of program-tested moral applications and implemented systems. In addition, they provide alternative readings paths, allowing readers to best focus on their specific interests and to explore the concepts at different levels of detail.” (Information by Springer) The download link is eu1lib.vip/book/2677910/9fd009.

Hold a Video Call Naked Without Trouble

A software developer has created a tool that retouches pants onto the lower body to protect against embarrassing situations. He presents his solution on his YouTube channel Everything Is Hacked, in a video titled “I made a Zoom filter to add pants when you forget to wear them”. Below the video, he writes: “Using Python, OpenCV, MediaPipe, and pyvirtualcam to create a Zoom (or Teams or Hangouts or whatever) video filter to blur out your lower half or add customizable pants. This should work on any platform + video call app, as well as on recordings.” (Everything Is Hacked, May 10, 2022) The code is available at https://github.com/everythingishacked/Pants
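
The quoted description maps onto a fairly small pipeline. The following is a hedged sketch of that kind of filter, not the code from the repository: MediaPipe Pose finds the hips, OpenCV blurs everything below them, and pyvirtualcam feeds the result to a virtual camera that the video call app can select. The camera index, resolution, and blur-only behavior are assumptions.

```python
import cv2
import mediapipe as mp
import pyvirtualcam

mp_pose = mp.solutions.pose

def blur_below_hips(frame, landmarks):
    """Blur everything in the frame below the detected hip line."""
    h, w = frame.shape[:2]
    hip_y = (landmarks[mp_pose.PoseLandmark.LEFT_HIP].y +
             landmarks[mp_pose.PoseLandmark.RIGHT_HIP].y) / 2
    y = max(0, min(h - 1, int(hip_y * h)))
    frame[y:, :] = cv2.GaussianBlur(frame[y:, :], (51, 51), 0)
    return frame

cap = cv2.VideoCapture(0)  # assumes a webcam at index 0
with mp_pose.Pose() as pose, pyvirtualcam.Camera(width=640, height=480, fps=20) as cam:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (640, 480))
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks:
            frame = blur_below_hips(frame, result.pose_landmarks.landmark)
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # pyvirtualcam expects RGB
        cam.sleep_until_next_frame()
```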

Talking with Animals

We use our natural language, facial expressions and gestures when communicating with our fellow humans. Some of our social robots also have these abilities, and so we can converse with them in the usual way. Many highly evolved animals have a language in which there are sounds and signals that have specific meanings. Some of them – like chimpanzees or gorillas – have facial and gestural abilities comparable to ours. Britt Selvitelle and Aza Raskin, founders of the Earth Species Project, want to use machine learning to enable communication between humans and animals. Languages, they believe, can be represented as geometric structures and translated by matching those structures to each other (a sketch of this idea follows below). They say they have started working on whale and dolphin communication. Over time, the focus will broaden to include primates, corvids, and others. The two scientists consider it important to study not only natural language but also facial expressions, gestures, and other movements associated with meaning (they are well aware of this challenge). In addition, there are aspects of animal communication that are inaudible and invisible to humans and that would need to be considered. Britt Selvitelle and Aza Raskin believe that translation would open up the world of animals – but it could also be the other way around: they might first have to open up the world of animals in order to decode their language. However, should there be breakthroughs in this area, it would be an opportunity for animal welfare. For example, social robots, autonomous cars, wind turbines, and other machines could use animal languages alongside mechanical signals and human commands to instruct, warn, and scare away dogs, elks, pigs, and birds. Machine ethics has been developing animal-friendly machines for years. Among other things, researchers in this field use sensors together with decision trees. Depending on the situation, braking and evasive maneuvers are initiated. Maybe one day the autonomous car will be able to avoid an accident by calling out in deer dialect: “Hello deer, go back to the forest!”
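
The claim that languages can be translated by matching their geometric structures can be made concrete with a toy example. The sketch below is illustrative only and is not the Earth Species Project's method: it builds two synthetic embedding spaces that differ by a hidden rotation, recovers the mapping with orthogonal Procrustes, and “translates” a point by nearest-neighbour search. All data and dimensions are made up.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes, qr

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 100))                      # embeddings of 50 signals in "language" A (made up)
Q, _ = qr(rng.normal(size=(100, 100)))              # a hidden orthogonal map between the two spaces
Y = X @ Q + rng.normal(scale=0.01, size=X.shape)    # corresponding embeddings in "language" B

# Learn the best orthogonal mapping from A onto B using the paired examples ...
R, _ = orthogonal_procrustes(X, Y)

# ... then "translate" a point from A into B and match it by nearest neighbour.
mapped = X[0] @ R
nearest = int(np.argmin(np.linalg.norm(Y - mapped, axis=1)))
print("Nearest neighbour in space B:", nearest)  # expected: 0
```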

Welcome to the AI Opera

Blob Opera is an AI experiment by David Li in collaboration with Google Arts and Culture. According to the website, it pays tribute to and explores the original musical instrument, namely the voice. “We developed a machine learning model trained on the voices of four opera singers in order to create an engaging experiment for everyone, regardless of musical skills. Tenor Christian Joel, bass Frederick Tong, mezzo-soprano Joanna Gamble and soprano Olivia Doutney recorded 16 hours of singing. In the experiment you don’t hear their voices, but the machine learning model’s understanding of what opera singing sounds like, based on what it learnt from them.” (Website Blob Opera) You can drag the blobs up and down to change pitch – or forwards and backwards for different vowel sounds. It is not only pleasurable to hear the blobs, but also to see them. While singing, they look around and open and close their mouths. Their tongues can even be seen now and then.

A Spider that Reads the Whole Web

Diffbot, a Stanford startup, is building an AI-based spider that reads as many pages as possible on the entire public web and extracts as many facts from those pages as it can. “Like GPT-3, Diffbot’s system learns by vacuuming up vast amounts of human-written text found online. But instead of using that data to train a language model, Diffbot turns what it reads into a series of three-part factoids that relate one thing to another: subject, verb, object.” (MIT Technology Review, 4 September 2020) Knowledge graphs – which is what this is all about – have been around for a long time. However, they have mostly been created manually or only for certain domains. Some years ago, Google started using knowledge graphs too. Instead of giving us a list of links to pages about Spider-Man, the service gives us a set of facts about him drawn from its knowledge graph. But it only does this for its most popular search terms. According to MIT Technology Review, the startup wants to do it for everything. “By fully automating the construction process, Diffbot has been able to build what may be the largest knowledge graph ever.” (MIT Technology Review, 4 September 2020) Diffbot’s AI-based spider reads the web as we read it and sees the same facts that we see. Even if it does not really understand what it sees, we will be amazed at the results.
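
The subject-verb-object idea can be illustrated in a few lines of code. The sketch below is a toy, not Diffbot's system: it uses spaCy's dependency parse to pull rough triples out of sentences, assuming the en_core_web_sm model is installed.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triples(text):
    """Yield rough (subject, verb, object) triples found in the text."""
    for token in nlp(text):
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
            for s in subjects:
                for o in objects:
                    yield (s.text, token.lemma_, o.text)

graph = list(extract_triples("Spider-Man fights crime. Diffbot reads the public web."))
print(graph)  # e.g. [('Spider-Man', 'fight', 'crime'), ('Diffbot', 'read', 'web')]
```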

Machine Dance

Which moves go with which song? Should I do the Floss, the Dougie, or the Robot? Or should I create a new style? But which one? An AI system could help answer these questions in the future. At least an announcement by a social media company raises this hope: “Facebook AI researchers have developed a system that enables a machine to generate a dance for any input music. It’s not just imitating human dance movements; it’s creating completely original, highly creative routines. That’s because it uses finely tuned search procedures to stay synchronized and surprising, the two main criteria of a creative dance. Human evaluators say that the AI’s dances are more creative and inspiring than meaningful baselines.” (Website FB) The AI system could inspire dancers when they get stuck and help them to constantly improve. More information via about.fb.com/news/2020/08/ai-dancing-facebook-research/.
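
The quoted idea of search procedures balancing synchronization and surprise can be illustrated with a deliberately simple toy that has nothing to do with Facebook's actual system: candidate moves are scored on whether they fit the remaining beats of a bar and whether they differ from recent moves, and the best-scoring candidate wins. Move names and beat counts are made up.

```python
import random

# Beats each (made-up) move occupies; real systems work with motion data.
MOVES = {"floss": 2, "dougie": 4, "robot": 2, "wave": 1, "spin": 4}

def next_move(beats_left_in_bar, history, n_candidates=50):
    """Search candidate moves and keep the one scoring best on both criteria."""
    best_move, best_score = None, float("-inf")
    for _ in range(n_candidates):
        move = random.choice(list(MOVES))
        synchronized = 1.0 if MOVES[move] <= beats_left_in_bar else -1.0  # fits the bar
        surprising = 1.0 if move not in history[-3:] else -0.5            # avoids recent repeats
        score = synchronized + surprising
        if score > best_score:
            best_move, best_score = move, score
    return best_move

history = []
for _ in range(8):
    history.append(next_move(beats_left_in_bar=4, history=history))
print(history)
```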

Imitating the Agile Locomotion Skills of Four-legged Animals

Imitating the agile locomotion skills of animals has been a longstanding challenge in robotics. Manually-designed controllers have been able to reproduce many complex behaviors, but building such controllers is time-consuming and difficult. According to Xue Bin Peng (Google Research and University of California, Berkeley) and his co-authors, reinforcement learning provides an interesting alternative for automating the manual effort involved in the development of controllers. In their work, they present “an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals” (Xue Bin Peng et al. 2020). They show “that by leveraging reference motion data, a single learning-based approach is able to automatically synthesize controllers for a diverse repertoire of behaviors for legged robots” (Xue Bin Peng et al. 2020). By incorporating sample-efficient domain adaptation techniques into the training process, their system “is able to learn adaptive policies in simulation that can then be quickly adapted for real-world deployment” (Xue Bin Peng et al. 2020). For demonstration purposes, the scientists trained “a quadruped robot to perform a variety of agile behaviors ranging from different locomotion gaits to dynamic hops and turns” (Xue Bin Peng et al. 2020).
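
As a rough illustration of the imitation-learning idea, not the authors' implementation, the snippet below computes a reward that scores how closely a robot's joint configuration tracks a frame of a reference motion clip; a policy trained to maximize such rewards (for instance with a standard RL algorithm) learns to reproduce the reference behavior. The joint count, scale factor, and synthetic clip are assumptions.

```python
import numpy as np

def imitation_reward(robot_pose, reference_pose, scale=2.0):
    """Return a reward in (0, 1]; it is 1 when the robot matches the reference exactly."""
    error = np.sum((np.asarray(robot_pose) - np.asarray(reference_pose)) ** 2)
    return float(np.exp(-scale * error))

# Synthetic reference clip: 100 frames of 12 joint angles (made up for illustration).
reference_clip = np.sin(np.linspace(0, 2 * np.pi, 100))[:, None] * np.ones((1, 12))
# A noisy rollout standing in for the robot's actual motion.
robot_rollout = reference_clip + np.random.normal(scale=0.05, size=reference_clip.shape)

rewards = [imitation_reward(r, ref) for r, ref in zip(robot_rollout, reference_clip)]
print("Mean imitation reward:", np.mean(rewards))
```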

Learning How to Behave

In October 2019 Springer VS published the “Handbuch Maschinenethik” (“Handbook Machine Ethics”) with German and English contributions. The editor is Oliver Bendel (Zurich, Switzerland). One of the articles was written by Bertram F. Malle (Brown University, Rhode Island) and Matthias Scheutz (Tufts University, Massachusetts). From the abstract: “We describe a theoretical framework and recent research on one key aspect of robot ethics: the development and implementation of a robot’s moral competence. As autonomous machines take on increasingly social roles in human communities, these machines need to have some level of moral competence to ensure safety, acceptance, and justified trust. We review the extensive and complex elements of human moral competence and ask how analogous competences could be implemented in a robot. We propose that moral competence consists of five elements, two constituents (moral norms and moral vocabulary) and three activities (moral judgment, moral action, and moral communication). A robot’s computational representations of social and moral norms is a prerequisite for all three moral activities. However, merely programming in advance the vast network of human norms is impossible, so new computational learning algorithms are needed that allow robots to acquire and update the context-specific and graded norms relevant to their domain of deployment. Moral vocabulary is needed primarily for moral communication, which expresses moral judgments of others’ violations and explains one’s own moral violations – to justify them, apologize, or declare intentions to do better. Current robots have at best rudimentary moral competence, but with improved learning and reasoning they may begin to show the kinds of capacities that humans will expect of future social robots.” (Abstract “Handbuch Maschinenethik”). The book is available via www.springer.com.
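
As a purely illustrative sketch, not Malle and Scheutz's framework or any system from the book, the snippet below shows what a computational representation of context-specific, graded norms and a minimal moral judgment over them could look like. All norms, contexts, and strengths are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    context: str      # where the norm applies
    action: str       # the regulated action
    strength: float   # graded: 0 = weak preference, 1 = strict prohibition

NORMS = [
    Norm("library", "speak loudly", 0.7),
    Norm("hospital", "speak loudly", 0.9),
    Norm("hospital", "block corridor", 1.0),
]

def judge(context, action):
    """Return a graded judgment: 0 = permissible, 1 = severe violation."""
    violations = [n.strength for n in NORMS if n.context == context and n.action == action]
    return max(violations, default=0.0)

print(judge("hospital", "speak loudly"))   # 0.9 -> clear violation
print(judge("library", "block corridor"))  # 0.0 -> no matching norm
```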