Project CAIRaoke: Building the assistants of the future with breakthroughs in conversational AI
If we could interact with an AI assistant in natural, conversational language, it could make our lives easier in countless ways. But assistants today are underwhelming, whether we’re interacting with them via voice or text. We developed an end-to-end neural model that can power much more personal and contextual conversations than the systems people are familiar with today.
An intro to AI, made for students
“Discover AI in Daily Life” is Google’s free, online, video-based curriculum. It’s designed with middle and high school students in mind, and dives into how AI is built, and how it helps people every day. The lesson is intended for use in a wide range of courses, not just in computer science or technology classes. You’ll find simple, non-technical explanations of how a machine can “learn” from patterns in data.
Hybrid AI Will Go Mainstream in 2022
IDC predicts the AI market will top $500 billion as early as 2024, with penetration across virtually all industries driving a wealth of applications and services designed to make work more effective. CB Insights Research reported that at the close of Q3 2021, funding for AI companies had already surpassed 2020 levels by roughly 55%. In 2022, we can expect AI to get better at solving the practical problems that hamper processes driven by unstructured language data, thanks to improvements in complex cognitive tasks such as natural language understanding.
System Cards, a new resource for understanding how AI systems work
Meta has published a prototype AI System Card tool that is designed to provide insight into an AI system’s underlying architecture and help better explain how the AI operates. The pilot System Card is for Instagram feed ranking, which is the process of taking as-yet-unseen posts from accounts that a person follows and then ranking them based on how likely that person is to be interested in them.
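Mechanically, that last step boils down to sorting candidate posts by a predicted interest score. A minimal sketch of the idea, with hypothetical posts, scores, and field names (not Meta's actual system):

```python
# Minimal sketch of interest-based feed ranking. The posts, scores, and
# field names are hypothetical -- an upstream model is assumed to have
# produced each post's predicted probability of interest.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    author: str
    predicted_interest: float  # p(user engages), from an upstream model

unseen_posts = [
    Post("p1", "alice", 0.12),
    Post("p2", "bob", 0.87),
    Post("p3", "carol", 0.55),
]

# The feed is simply the unseen posts, highest predicted interest first.
feed = sorted(unseen_posts, key=lambda p: p.predicted_interest, reverse=True)
for post in feed:
    print(post.post_id, post.predicted_interest)
```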
Using artificial intelligence to find anomalies hiding in massive datasets
Researchers at MIT-IBM Watson AI Lab have developed a machine-learning model that can automatically pinpoint anomalies in power grid data streams in real time. They demonstrated that their artificial intelligence method is much better at detecting these glitches than some other popular techniques. The model is flexible and can be applied to other situations where a vast number of interconnected sensors collect and report data, like traffic monitoring systems.
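The article does not detail the model's internals. For intuition about the task itself, here is a minimal rolling z-score baseline for flagging anomalies in a sensor stream; it is an illustrative stand-in, not the MIT-IBM method, and the signal values and threshold are made up:

```python
# Minimal rolling z-score baseline for stream anomaly detection --
# an illustrative stand-in, not the MIT-IBM Watson AI Lab model.
from collections import deque
import math

def detect_anomalies(stream, window=50, threshold=4.0):
    """Yield (index, value) for readings far outside the recent window."""
    recent = deque(maxlen=window)
    for i, x in enumerate(stream):
        if len(recent) == window:
            mean = sum(recent) / window
            var = sum((v - mean) ** 2 for v in recent) / window
            std = math.sqrt(var) or 1e-9  # avoid division by zero
            if abs(x - mean) / std > threshold:
                yield i, x
        recent.append(x)

# Example: a nearly flat sensor signal with one injected glitch.
readings = [60.0 + 0.01 * (i % 3) for i in range(200)]
readings[120] = 75.0
print(list(detect_anomalies(readings)))  # -> [(120, 75.0)]
```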
Deep-learning technique predicts clinical treatment outcomes
A deep-learning technique, called G-Net, from MIT and IBM, provides a window into causal counterfactual prediction. It allows physicians to explore how a patient might fare under different treatment plans. In this way, physicians can develop alternative plans based on patient history and test them before making a decision. The foundation of G-Net is the g-computation algorithm, a causal inference method.
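As a toy illustration of plain g-computation (a single time point, a linear outcome model, and synthetic data, so a far simpler setting than G-Net's sequential one), the idea is to fit an outcome model and then average its predictions with treatment forced to each plan of interest:

```python
# Toy single-time-point g-computation (an illustration of the general
# idea, not G-Net itself): fit an outcome model, then compare its average
# predictions with treatment forced on versus forced off.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
covariate = rng.normal(size=n)  # stand-in for patient history
# Treatment assignment depends on the covariate (confounding).
treat = (rng.random(n) < 1 / (1 + np.exp(-covariate))).astype(float)
# True effect of treatment on the outcome is 2.0.
outcome = 2.0 * treat + 1.5 * covariate + rng.normal(size=n)

# Outcome model: least squares on [1, treat, covariate].
X = np.column_stack([np.ones(n), treat, covariate])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

# Counterfactual predictions: everyone treated vs. no one treated.
X_treat = np.column_stack([np.ones(n), np.ones(n), covariate])
X_none = np.column_stack([np.ones(n), np.zeros(n), covariate])
effect = (X_treat @ beta).mean() - (X_none @ beta).mean()
print(f"estimated average treatment effect: {effect:.2f}")  # ~2.0
```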
‘Artificial Intelligence of Things,’ edge analytics create harmony at Gebhardt – Advanced Manufacturing
IoT and AI are separate technology trends that are both making waves in industry. The IoT can connect devices together, giving and receiving signals like a nervous system. In contrast, AI can act as a brain, using data to make informed decisions that control the overall system. Joined together, the two can deliver intelligent, connected systems that self-correct and self-heal.
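A minimal sketch of that nervous-system/brain split might look like the following, where the device names, readings, and decision rule are all hypothetical:

```python
# Minimal AIoT control-loop sketch: IoT sensors feed a model (the
# "brain"), whose decision goes back out to an actuator. The device
# names, readings, and decision rule are all hypothetical.
def read_sensors():
    # Stand-in for real IoT telemetry (e.g., a conveyor motor).
    return {"motor_temp_c": 82.0, "vibration_mm_s": 4.1}

def decide(telemetry):
    # Stand-in for a trained model: here, a simple threshold rule.
    if telemetry["motor_temp_c"] > 80.0 or telemetry["vibration_mm_s"] > 6.0:
        return "slow_down"
    return "run_normal"

def actuate(command):
    # Would go back out over the IoT link to the device.
    print(f"actuator command: {command}")

actuate(decide(read_sensors()))  # -> actuator command: slow_down
```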
What is Adversarial Machine Learning?
The more dependent we become on machine learning models, the more their vulnerabilities matter, and adversaries keep finding new ways to defeat these models. There are two different types of adversarial attacks: whitebox and blackbox. A whitebox attack is one in which the attacker has full access to the target model, including detailed knowledge of the network architecture. Knowing the model's ins and outs, the attacker can craft an attack strategy based on its loss function.
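The textbook example of a loss-based whitebox attack is the fast gradient sign method (FGSM), which nudges every input feature in the direction that increases the model's loss. A minimal sketch with an untrained stand-in model:

```python
# Fast gradient sign method (FGSM), the textbook loss-based whitebox
# attack: perturb the input along the sign of the loss gradient.
# The model here is an untrained stand-in, just to show the mechanics.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(784, 10)   # stand-in classifier
x = torch.rand(1, 784)             # stand-in input "image" in [0, 1]
y = torch.tensor([3])              # its true label
epsilon = 0.03                     # perturbation budget

x_adv = x.clone().requires_grad_(True)
loss = F.cross_entropy(model(x_adv), y)
loss.backward()                    # gradient of the loss w.r.t. the input

# One FGSM step: move each pixel by +/- epsilon along the loss gradient,
# keeping the result a valid image.
with torch.no_grad():
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)

# The perturbed input typically has a higher loss than the original.
print(F.cross_entropy(model(x_adv), y).item() > loss.item())
```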
Why Neural Networks Forget, and Lessons from the Brain
Artificial neural networks struggle to learn continually and suffer from catastrophic forgetting. Each task that a neural network learns to perform has a single error function that the network is trying to minimize, irrespective of the error of any other task. For the network to change its behavior or learn how to perform a task, it must change its parameters. This is exactly what happens during training.
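A minimal demonstration with two synthetic regression tasks (purely illustrative) shows the effect directly: training on task B overwrites the parameters that task A relied on.

```python
# Minimal catastrophic-forgetting demo on synthetic tasks: train a
# linear model on task A, then on task B, and watch task A's error
# climb as task B's training overwrites the shared parameters.
import numpy as np

rng = np.random.default_rng(0)
w_a, w_b = rng.normal(size=3), rng.normal(size=3)  # two different "tasks"
X = rng.normal(size=(200, 3))
y_a, y_b = X @ w_a, X @ w_b

w = np.zeros(3)  # shared parameters
mse = lambda y, yhat: float(np.mean((y - yhat) ** 2))

def train(X, y, w, steps=200, lr=0.01):
    for _ in range(steps):
        w = w - lr * (2 / len(X)) * X.T @ (X @ w - y)  # descend this task's loss only
    return w

w = train(X, y_a, w)
print("task A error after training A:", mse(y_a, X @ w))  # near zero
w = train(X, y_b, w)
print("task A error after training B:", mse(y_a, X @ w))  # large again: forgotten
```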
Improving customer decisions in web-based e-commerce through guerrilla modding
The e-commerce share of global retail sales more than doubled over the years 2016–2020 and is expected to reach 21% in 2023. On these platforms, sellers control how product information is presented, leaving consumers at an informational disadvantage. To tackle this imbalance, consumers can be given more control over product information through post-production Web customization (‘modding’) on the client side, without depending on the entity that controls the server-side Web content.
Teaching AI to translate 100s of spoken and written languages in real time
Machine translation (MT) systems are improving rapidly, but they still rely heavily on learning from large amounts of textual data. Meta AI is announcing a long-term effort to build language and MT tools that will cover most of the world’s languages. In the not-too-distant future, when emerging technologies like virtual and augmented reality bring the digital and physical worlds together in the metaverse, translation tools will enable people to carry out everyday activities in any language.
Using Artificial Intelligence to Analyze Particles in Electron Microscopy – AZoM
Using SELPA (Scanning ELectron microscope for Particle Analysis), produced by COXEM, together with Oxford AZtec software makes it possible to automatically analyze large areas of fine particles collected on an ISO 16232 standard filter. Particle analysis uses both morphology (shape) and chemistry to classify particles, allowing the user to determine a particle’s origin as well as its possible influence on the sample.
Embracing Artificial Intelligence in Design Automation – Burns & McDonnell
Automation is playing an increasing role in infrastructure engineering projects, and the technology is more accessible and cost-effective than ever for engineers to deploy. Traditional rule-based automation, however, can only be used in narrow and unchanging applications, where all possible inputs can be specifically predicted and paired with predefined output actions. Intelligent automation refers to the application of AI technology to determine the appropriate system outputs from given system inputs.
Developing drugs with the aid of artificial intelligence – Medical Xpress
Pharmaceutical scientist Xuhan Liu has developed deep learning methods that could help make drug design cheaper and faster. Liu: “We managed to improve the chemical diversity with the deep learning model that I developed.” He will defend his dissertation on computational drug design on 15 February at Leiden University.
In MIT visit, Dropbox CEO Drew Houston ’05 explores the accelerated shift to distributed work
Dropbox co-founder Drew Houston ’05 spoke with the MIT community about how the Covid-19 pandemic has accelerated the shift to distributed work. Houston: “There’s no playbook for running a global company in a pandemic over Zoom. For a lot of it we were just taking it as we go.” Houston also spoke about his $10 million gift to MIT to endow the first shared professorship between the MIT Schwarzman College of Computing and the MIT Sloan School of Management.
Brains@Bay: A Meetup on Brain-Inspired Machine Learning Algorithms
Brains@Bay is a meetup hosted by Numenta with the goal of bringing together experts and practitioners at the intersection of neuroscience and AI. The mission is to foster the study and development of machine learning algorithms heavily inspired by empirical and theoretical neuroscience research. In October 2019, we hosted our meetup in partnership with UCSC at their Silicon Valley Campus and talked about Hebbian learning in neural networks over pizza.
The benefits of peripheral vision for machines
MIT researchers say a robust computer-vision model perceives images in much the way humans do with their peripheral vision. Such models are trained to withstand subtle noise added to image data. Because machines do not have a visual periphery, little computer-vision work has focused on peripheral processing, says senior author Arturo Deza. The research will be presented at the International Conference on Learning Representations.
A new age for content filters
Popular platforms such as Facebook and Google use highly tuned algorithms to provide personalized feeds of information. These algorithms usually have two goals when recommending items, namely to keep a user engaged on the website for as long as possible and present advertising content to as specific an audience as possible. Dangerously, platform algorithms contributed to the spread of misinformation during the last two US elections as well as during the current pandemic.
A Conversation on A Thousand Brains and Pedagogy: Exploring Narrative Argument with Big Ideas
In A Thousand Brains, Jeff Hawkins describes the notion of reference frames that are built and refined by the neocortex as central to learning. In this video series, we were joined by Dr. Michael Riendeau and his two students, Ranger Fair and Jacob Shalmi from Eagle Hill School. We explored how the ideas of the theory can be beneficial to educators and students, and the possible pedagogical implications of those ideas.
Accelerating fusion science through learned plasma control
Researchers have turned to simulators to help advance research into nuclear fusion, but simulators are slow, requiring many hours of computer time to simulate one second of real time. Physical experiments are scarce, too: the TCV tokamak (the variable-configuration tokamak at EPFL in Lausanne) can only sustain the plasma in a single experiment for up to three seconds, after which it needs 15 minutes to cool down and reset before the next attempt. Multiple research groups often share use of the tokamak, further limiting the time available for experiments.