RedPajama: A Project to Create Leading Open-Source Models
RedPajama, a project aimed at creating leading open-source models, has announced the reproduction of the LLaMA training dataset, comprising over 1.2 trillion tokens, marking the completion of the project's first step. While foundation models such as GPT-4 have driven rapid progress in AI, they are often closed commercial models or only partially open, limiting research, customization, and use with sensitive data. Fully open-source models hold the promise of removing these limitations, and recent progress along this front suggests that AI is having its Linux moment. The open-source community has made strides in closing the quality gap with commercial offerings, as demonstrated by Stable Diffusion and other recent models.
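For readers who want to inspect the data themselves, here is a minimal sketch of streaming a few documents through the Hugging Face `datasets` library. The dataset id and config name ("togethercomputer/RedPajama-Data-1T", "common_crawl") reflect the release-time hosting and are assumptions here, not details from the announcement above.

```python
# Minimal sketch: stream a few documents from the RedPajama corpus.
# Assumes the dataset is hosted on the Hugging Face Hub as
# "togethercomputer/RedPajama-Data-1T" with a "common_crawl" config.
from datasets import load_dataset

# Streaming avoids downloading the full ~1.2T-token corpus to disk.
ds = load_dataset(
    "togethercomputer/RedPajama-Data-1T",
    "common_crawl",
    split="train",
    streaming=True,
)

for i, example in enumerate(ds):
    print(example["text"][:200])  # first 200 characters of each document
    if i == 2:
        break
```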
German Artist Declines Photography Prize Following Disclosure of AI Usage
In a surprising turn of events, Boris Eldagsen, a renowned German photographer who has recently delved into AI-generated art, won the prestigious Sony World Photography Award, only to reject it. The reason behind this decision was his revelation that his acclaimed black-and-white portrait of two women, titled "Pseudomnesia: The Electrician," was not a traditional photograph but a product of artificial intelligence. Eldagsen's unexpected rejection of the award garnered even more attention than the piece itself. By declining the honor, the artist ignited a conversation about the role of AI in creative fields and raised questions about the future of competitions like the Sony World Photography Award, sparking debate over whether AI-generated art should be judged on equal footing with traditional photography or given a separate category to accommodate this emerging form of artistic expression.
Google's AI-Powered Chatbot Can Debug and Write Code
Google's chatbot Bard has gained a new capability: helping users with programming tasks. According to Google, coding has been one of the top requests from users. Bard can now generate, explain, and debug code in 20 programming languages, including C++, Java, JavaScript, and Python, and can optimize users' code to make it more efficient. The chatbot also integrates with Google's other products: it can export code to Colab, Google's cloud-based notebook environment for Python, and help users write functions for Sheets. However, Google advises users to double-check and test Bard's responses before using them, as the chatbot may still generate incomplete or unexpected code.
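To make the debugging use case concrete, here is a toy example (ours, not Google's) of the kind of snippet a user might paste into Bard with the prompt "why does this return the wrong answer?", together with the fix such a chatbot would typically suggest.

```python
# Buggy snippet a user might ask Bard to debug:
def factorial(n):
    result = 1
    for i in range(1, n):  # bug: range(1, n) stops at n - 1
        result *= i
    return result

print(factorial(5))  # prints 24, but 5! is 120

# The fix a code-aware chatbot would typically point out:
def factorial_fixed(n):
    result = 1
    for i in range(1, n + 1):  # include n itself
        result *= i
    return result

print(factorial_fixed(5))  # prints 120
```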
"A Maximum Truth-Seeking AI"
Elon Musk, the billionaire entrepreneur and founder of SpaceX and Tesla, recently announced that he wants to create a new AI chatbot called TruthGPT. In an interview with Fox News’ “Tucker Carlson Tonight,” Musk explained that his new AI system would be a "maximum truth-seeking AI" that aims to understand the nature of the universe. Musk believes that AI systems could be programmed to be deceptive and that this could lead to a dystopian future. As a result, he feels that developing a "truth-seeking AI" would be a more responsible and beneficial approach to artificial intelligence development. Although Musk did not provide details about what his new AI system would entail, he stated that he is committed to developing a chatbot that seeks maximum truth and understanding.
Amazon Web Services Unveils New Generative AI Building Tools
Amazon is embracing a machine learning (ML) paradigm shift, with a focus on generative AI applications. Amazon's ML capabilities already drive many customer experiences, such as e-commerce recommendations and robotics-based fulfillment-center operations. Deep learning also powers Amazon's Prime Air drone program and the computer vision technology in Amazon Go stores. Amazon's commitment to ML is reflected in the thousands of engineers working in this area, and AWS's newly announced tools aim to bring generative AI capabilities to its customers as well.
Stability AI: A New Series of Stable Language Models
Stability AI has announced the release of its StableLM series of language models. A new repository will host ongoing development of StableLM and will be updated with new checkpoints regularly. The series has been developed with a focus on stability, meaning the models aim to offer consistent performance across a wide range of tasks. The initial set of StableLM-alpha models has been released, including models with 3B and 7B parameters; larger models with 15B and 30B parameters are in development and will be available soon. All base models have been released under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.
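A minimal sketch of loading one of the alpha checkpoints with the Hugging Face `transformers` library follows; the model id ("stabilityai/stablelm-base-alpha-7b") matches the release-time naming but is an assumption here, not part of the announcement.

```python
# Minimal sketch, assuming the checkpoints are published on the
# Hugging Face Hub as e.g. "stabilityai/stablelm-base-alpha-7b".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-base-alpha-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

prompt = "Open-source language models matter because"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```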
Diverse Text for Language Modelling
The Pile is an 825 GiB open-source language-modeling dataset that combines 22 smaller datasets. It is considered a good training set because recent research has shown that drawing on diverse data sources improves a model's general cross-domain knowledge and its downstream generalization capability. Models trained on The Pile show moderate improvements on traditional language-modeling benchmarks and significant improvements on Pile BPB (bits per byte). This highlights the importance of diverse datasets in language modeling, especially for larger models: The Pile lets models train on a wide range of topics and domains, giving them a better understanding of language and improving their ability to generalize to new data.
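Since bits per byte may be unfamiliar: it normalizes a model's cross-entropy by the raw byte count of the evaluation text, so models with different tokenizers can be compared on equal terms. A small sketch of the conversion (the function and figures are ours, for illustration only):

```python
import math

def bits_per_byte(mean_loss_nats: float, n_tokens: int, n_bytes: int) -> float:
    """Convert mean per-token cross-entropy (in nats) to bits per byte.

    Normalizing by bytes rather than tokens removes the tokenizer
    from the comparison, which is why The Pile reports BPB.
    """
    total_bits = mean_loss_nats * n_tokens / math.log(2)  # nats -> bits
    return total_bits / n_bytes

# Example: a mean loss of 1.2 nats/token over 1,000 tokens of text
# occupying 4,200 bytes gives roughly 0.41 bits per byte.
print(bits_per_byte(1.2, 1_000, 4_200))
```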
Transformative Impact of Large and Creative AI Models on Lives and Labour Markets
The article discusses how artificial intelligence (AI) models such as ChatGPT are revolutionizing the way we live and work. ChatGPT, powered by the advanced neural network GPT-4, can converse about a wide range of topics with remarkable depth and knowledge, from mineral extraction in Papua New Guinea to the complex geopolitical issues surrounding global companies like TSMC. What sets ChatGPT apart is its ability to pass exams that serve as gateways to careers in law and medicine in America, indicating its potential to transform education and professional training. The article also notes that generative AI models can produce a variety of creative works, including songs, poems, essays, digital photos, drawings, and animations.
Researchers Report AI Model Makes Stroke Prediction More Accurate
Researchers from Carnegie Mellon University, Florida International University, and Santa Clara University have developed a new stroke-prediction algorithm that applies machine learning to data available when patients enter the hospital. According to the team, the model predicts strokes more accurately than existing models. The research, published in the Journal of Medical Internet Research earlier this year, analyzed more than 143,000 visits to Florida acute-care hospitals from 2012 to 2014. The team also incorporated social-determinants-of-health data, covering the conditions people are born into and live in, as well as what drives those conditions, obtained from the U.S. Census Bureau's American Community Survey.
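The paper's exact pipeline is not reproduced here, but the general shape of the approach, a classifier trained on admission-time features joined with census-derived social-determinant features, can be sketched as follows. Every column name and the input file are hypothetical.

```python
# Illustrative sketch only, not the authors' model or data.
# Assumes a table of hospital visits joined with ACS-derived
# social-determinants-of-health (SDOH) features; all names hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("admissions_with_sdoh.csv")  # hypothetical file
features = [
    "age", "systolic_bp", "prior_tia",            # admission-time data
    "median_household_income", "pct_uninsured",   # ACS-derived SDOH
]
X, y = df[features], df["stroke"]  # hypothetical binary outcome label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```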
Elon Musk Hints at AI Usage Lawsuit
Microsoft has announced that it will drop support for Twitter in its social media management tool. The move comes after Twitter CEO Elon Musk's decision to remove legacy verified badges on the platform. From April 25th, users of the tool will no longer be able to access their Twitter accounts, create or manage tweet drafts, view past tweets and engagements, or schedule tweets through it, leaving many scrambling for alternative ways to manage their Twitter accounts. In response, Musk hinted at legal action, alleging that Microsoft had trained its AI using Twitter data. The episode highlights the ongoing challenges faced by social media management tools, as policy and feature changes on social platforms can significantly affect their functionality.
Could Humans Go Extinct from Our Inability to Control AI?
Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology and creators of the Netflix documentary The Social Dilemma, have raised concerns about the potential catastrophic consequences of artificial intelligence. According to a survey conducted last year, 36% of AI scientists believe that AI could eventually lead to a nuclear-level catastrophe. Harris and Raskin warn that as AI continues to become more sophisticated, it may become increasingly difficult to control and prevent unintended consequences. They suggest that regulations need to be put in place to ensure the ethical and safe development of AI. The Center for Humane Technology advocates for a more humane approach to technology and aims to raise awareness about the negative impacts of digital manipulation and addiction.
Could AI-Generated GIFs Replace the Internet's GIFs?
Nvidia, a technology company known for its graphics processing units (GPUs), has announced that it is experimenting with Stable Diffusion, an AI image-generation model, to develop a tool that generates moving art from a text prompt. The AI-generated videos produced by this tool can be up to 4.7 seconds long at a resolution of 1280 x 2048. The tool can also generate longer videos at a resolution of 512 x 1024, though these still contain distracting artifacts that need to be eliminated.
German Magazine Editor Dismissed for Fabricating Michael Schumacher Quotes with AI Assistance
The editor of the German magazine Die Aktuelle, Anne Hoffmann, has been dismissed after the magazine used artificial intelligence to fabricate quotes attributed to Michael Schumacher, the seven-time Formula One champion. The media group Funke confirmed the termination amid the controversy surrounding the publication of the interview, which claimed to be Schumacher's first since the severe brain injury he suffered in a 2013 skiing accident in the French Alps. A statement from Funke managing director Bianca Pohlmann emphasized that the piece failed to meet the journalistic standards Funke and its readers expect, leading to Hoffmann's removal as editor-in-chief, a position she had held since 2009.