Tinder has officially launched its “Photo Selector” feature, which uses AI to help users choose the best photos for their dating profiles. This was reported by TechCrunch in the article “Tinder’s AI Photo Selector automatically picks the best photos for your dating profile” by Lauren Forristal. The feature, now available to all users in the U.S. and set to roll out internationally later this summer, relies on facial detection technology. Users upload a selfie, and the AI creates a unique facial geometry to identify their face and select photos of them from their camera roll. The feature then curates a collection of 10 photos it believes will perform well, based on Tinder’s insights into good profile images, such as lighting and composition. Tinder’s AI is trained on a diverse dataset to ensure inclusivity and accuracy, in line with the company’s Diversity, Equity, and Inclusion (DEI) standards. It also filters out photos that violate guidelines, such as nudes. The goal is to save users time and reduce uncertainty when choosing profile pictures. A recent Tinder survey found that 68% of participants considered an AI photo selection feature helpful and that 52% had trouble selecting profile images. The TechCrunch article was published on 17 July 2024 and is available at techcrunch.com/2024/07/17/tinder-ai-photo-selection-feature-launches/.
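Tinder has not disclosed how the pipeline is implemented. Purely as an illustration of the general idea – match a reference selfie against a photo library, then rank the matches by a simple quality score – here is a minimal sketch using the open-source face_recognition and OpenCV libraries; the function names, the 0.6 distance threshold, and the sharpness-based ranking are assumptions for demonstration, not Tinder’s method:

```python
# Illustrative sketch only: Tinder's actual models are not public.
from pathlib import Path

import cv2                 # pip install opencv-python
import face_recognition    # pip install face_recognition

def sharpness(image_rgb):
    """Variance of the Laplacian: a crude proxy for lighting/composition quality."""
    gray = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def select_photos(selfie_path, photo_dir, top_k=10, max_distance=0.6):
    # Encode the reference selfie as a 128-dimensional "facial geometry".
    selfie = face_recognition.load_image_file(selfie_path)
    reference = face_recognition.face_encodings(selfie)[0]

    candidates = []
    for path in sorted(Path(photo_dir).glob("*.jpg")):
        image = face_recognition.load_image_file(path)
        for encoding in face_recognition.face_encodings(image):
            # Keep the photo if one of its faces matches the reference selfie.
            if face_recognition.face_distance([reference], encoding)[0] <= max_distance:
                candidates.append((sharpness(image), path))
                break

    candidates.sort(reverse=True)  # highest quality score first
    return [path for _, path in candidates[:top_k]]

print(select_photos("selfie.jpg", "camera_roll"))
```

A production system would replace the sharpness heuristic with a model trained on what actually performs well in profiles, which is presumably where Tinder’s proprietary insights come in.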
Adelina Can Write and Speak Basque
Conversational agents have been the subject of Prof. Dr. Oliver Bendel’s research for a quarter of a century. From the end of 1999 to the end of 2002, he dedicated his doctoral thesis at the University of St. Gallen to them – or more precisely to pedagogical agents, which would probably be called virtual learning companions today. He has been a professor at the FHNW School of Business since 2009. From 2012, he mainly developed chatbots and voice assistants in the context of machine ethics, including GOODBOT, LIEBOT, BESTBOT, and SPACE THEA. In 2022, the information systems specialist and philosopher of technology turned his attention to dead and endangered languages. Under his supervision, Karim N’diaye developed the chatbot @ve for Latin, and Dalil Jabou the chatbot @llegra – enhanced with voice output – for Vallader, an idiom of Rhaeto-Romanic. He is currently testing the range of GPTs – “customized versions of ChatGPT”, as OpenAI calls them – for endangered languages such as Irish (Irish Gaelic), Maori, and Basque. According to ChatGPT, there is a relatively large amount of training material for these languages. On May 12, 2024 – after Irish Girl and Maori Girl – a first version of Adelina was created. The name commemorates the teacher Adelina Méndez de la Torre, who campaigned for bilingual teaching and the preservation of the Basque language. At first glance, the chatbot seems to have a good command of the language. Its answers can be translated into English or German on request. Adelina is available in the GPT Store and will be further improved in the coming weeks.
The AAAI Spring Symposia are Back Again
On the second day of the AAAI Spring Symposia, one could already get the impression that the traditional conference has returned to its former greatness. The Covid pandemic had damaged it. In 2023, there were still too few participants for some symposia. Many stayed home and watched the sessions online. It was difficult for everyone involved. But the problems had already started in 2019. At that time, the Association for the Advancement of Artificial Intelligence had decided to no longer publish the proceedings centrally, but to leave this to the individual organizers. Some of them were negligent or uninterested and left the scientists to fend for themselves. In 2024, the association took over the publication process again, which led to very positive reactions in the community. Last but not least, the boost from generative AI helped, of course. In 2024, one can see many happy and exuberant AI experts at Stanford University, amid mild temperatures and plenty of sunshine.
Generative AI at Stanford University
On March 26, 2024, Oliver Bendel (School of Business FHNW) gave two talks on generative AI at Stanford University. The setting was the AAAI Spring Symposia, more precisely the symposium “Impact of GenAI on Social and Individual Well-being (AAAI2024-GenAI)”. One presentation was based on the paper “How Can Generative AI Enhance the Well-being of the Blind?” by Oliver Bendel himself. It was about the GPT-4-based feature Be My AI in the Be My Eyes app. The other presentation was based on the paper “How Can GenAI Foster Well-being in Self-regulated Learning?” by Stefanie Hauske (ZHAW) and Oliver Bendel. The topic was GPTs used for self-regulated learning. Both talks were received with great interest by the audience. All papers of the AAAI Spring Symposia will be published in spring. The proceedings are edited by the Association for the Advancement of Artificial Intelligence itself.
Start of the European AI Office
The European AI Office was established in February 2024. The European Commission’s website states: “The European AI Office will be the center of AI expertise across the EU. It will play a key role in implementing the AI Act – especially for general-purpose AI – foster the development and use of trustworthy AI, and international cooperation.” (European Commission, February 22, 2024) And further: “The European AI Office will support the development and use of trustworthy AI, while protecting against AI risks. The AI Office was established within the European Commission as the center of AI expertise and forms the foundation for a single European AI governance system.” (European Commission, February 22, 2024) The EU’s stated aim is to ensure that AI is safe and trustworthy. The AI Act is the world’s first comprehensive legal framework for AI; it is intended to guarantee the health, safety, and fundamental rights of people and to provide legal certainty for companies in the 27 member states.
New Channel on Animal Law and Ethics
The new YouTube channel “GW Animal Law Program” went online at the end of November 2023. It collects lectures and recordings on animal law and ethics. Some of them are from the online event “Artificial Intelligence & Animals”, which took place on 16 September 2023. The speakers were Prof. Dr. Oliver Bendel (FHNW University of Applied Sciences Northwestern Switzerland), Yip Fai Tse (University Center for Human Values, Center for Information Technology Policy, Princeton University), and Sam Tucker (CEO VegCatalyst, AI-Powered Marketing, Melbourne). Other videos include “Tokitae, Reflections on a Life: Evolving Science & the Need for Better Laws” by Kathy Hessler, “Alternative Pathways for Challenging Corporate Humanewashing” by Brooke Dekolf, and “World Aquatic Animal Day 2023: Alternatives to the Use of Aquatic Animals” by Amy P. Wilson. In his talk, Oliver Bendel presents the basics and prototypes of animal-computer interaction and animal-machine interaction, including his own projects in the field of machine ethics. The YouTube channel can be accessed at www.youtube.com/@GWAnimalLawProgram/featured.
AAAI 2024 Spring Symposium Series
The Association for the Advancement of Artificial Intelligence (AAAI) will host its 2024 Spring Symposium Series at Stanford University from March 25 to 27, 2024. With a diverse array of symposia, each hosting 40-75 participants, the event is a vibrant platform for exploring the frontiers of AI. Of the eight symposia, only three are highlighted here: Firstly, the “Bi-directionality in Human-AI Collaborative Systems” symposium promises to delve into the dynamic interactions between humans and AI, exploring how these collaborations can evolve and improve over time. Secondly, the “Impact of GenAI on Social and Individual Well-being” symposium addresses the profound effects of generative AI technologies on society and individual lives. Lastly, “Increasing Diversity in AI Education and Research” focuses on a crucial issue in the tech world: diversity. It aims to highlight and address the need for more inclusive approaches in AI education and research, promoting a more equitable and diverse future in the field. Each of these symposia offers unique insights and discussions, making the AAAI 2024 Spring Symposium Series a key event for those keen to stay at the cutting edge of AI development and its societal implications. More information is available at aaai.org/conference/spring-symposia/sss24/#ss01.
Machine Learning for Lucid Dreaming
A start-up promises that lucid dreaming will soon be possible for everyone. This was reported by the German magazine Golem on November 10, 2023. The company, Prophetic, was founded by Eric Wollberg (CEO) and Wesley Louis Berry III (CTO). In a lucid dream, the dreamers are aware that they are dreaming. They can shape the dream according to their will and also exit it. Everyone has the ability to experience lucid dreams: one can learn to induce this form of dreaming, but one can also dream lucidly as a child and unlearn it again as an adult. The Halo headband, a non-invasive neural device, is designed to make lucid dreaming possible. “The combination of ultrasound and machine learning models (created using EEG & fMRI data) allows us to detect when dreamers are in REM to induce and stabilize lucid dreams.” (Prophetic website) According to Golem, the neural device will be available starting in 2025.
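Prophetic’s models are proprietary and unpublished. Purely as an illustration of the underlying idea – treating REM detection as a supervised classification problem over EEG features – here is a minimal sketch with SciPy and scikit-learn; the 30-second epochs, the frequency-band definitions, and the synthetic data are assumptions for demonstration:

```python
# Illustrative sketch only: Prophetic's EEG/fMRI-trained models are not public.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

FS = 256  # assumed sampling rate in Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """Average spectral power per EEG band for one 30-second epoch."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS * 2)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

# Synthetic stand-in data: 200 epochs of 30 s single-channel EEG,
# labeled 1 for REM and 0 for non-REM. A real system would use
# expert-scored polysomnography recordings instead.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, FS * 30))
labels = rng.integers(0, 2, size=200)

features = np.array([band_powers(e) for e in epochs])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[:150], labels[:150])
print("held-out accuracy:", clf.score(features[150:], labels[150:]))
```

On random data the accuracy hovers around chance; the point is only the shape of the pipeline: windowed EEG, spectral features, and a classifier that flags REM epochs in which stimulation could then be triggered.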
Be My AI
Be My AI is a GPT-4-based extension of the Be My Eyes app. Blind users take a photo of their surroundings or an object and then receive detailed descriptions, which are spoken in a synthesized voice. They can also ask further questions about details and contexts. Be My AI can be used in a variety of situations, including reading labels, translating text, setting up appliances, organizing clothing, and understanding the beauty of a landscape. It also offers written responses in 29 languages, making it accessible to a wider audience. While the app has its advantages, it’s not a replacement for essential mobility aids such as white canes or guide dogs. Users are encouraged to provide feedback to help improve the app as it continues to evolve. The app will become even more powerful when it starts to analyze videos instead of photos. This will allow blind people to move through their environment and receive constant descriptions and assessments of moving objects and changing situations. More information is available at www.bemyeyes.com/blog/announcing-be-my-ai.
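The Be My Eyes integration itself is not public, but the basic pattern – sending an image to a GPT-4-class vision model and asking follow-up questions in the same conversation – can be sketched with OpenAI’s standard chat API; the model name, prompt, and file name below are placeholders, not details of Be My AI:

```python
# Generic sketch of GPT-4-style image description, not Be My AI's actual code.
import base64

from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

client = OpenAI()

def describe(image_path, question="Describe this photo in detail."):
    # Encode the photo as a base64 data URL and send it with the question.
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in for the GPT-4 model Be My AI uses
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(describe("surroundings.jpg"))
```

Follow-up questions about details and contexts would simply be appended to the same message history; the app additionally reads the returned text aloud with a synthesized voice.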
All that Groks is God
Elon Musk has named his new language model Grok. The word comes from the science fiction novel “Stranger in a Strange Land” (1961) by Robert A. Heinlein. This famous novel features two characters who have studied the word. Valentine Michael Smith (aka Michael Smith or “Mike”, the “Man from Mars”) is the main character. He is a human who was born on Mars. Dr. “Stinky” Mahmoud is a semanticist. After Mike, he is the second person to speak the Martian language, though he does not “grok” it. In one passage, Mahmoud explains: “‘Grok’ means ‘identically equal.’ The human cliché ‘This hurts me worse than it does you’ has a Martian flavor. The Martians seem to know instinctively what we learned painfully from modern physics, that observer interacts with observed through the process of observation. ‘Grok’ means to understand so thoroughly that the observer becomes a part of the observed – to merge, blend, intermarry, lose identity in group experience. It means almost everything that we mean by religion, philosophy, and science – and it means as little to us as color means to a blind man.” Mike says a little later in the dialog: “God groks.” In another place, there is a similar statement: “… all that groks is God …”. In a way, this fits in with what is written on the website of Elon Musk’s AI start-up: “The goal of xAI is to understand the true nature of the universe.” The only question is whether this goal will remain science fiction or become reality.