
The hottest subjects on campus

On an afternoon in early April, Tommi Jaakkola is pacing at the front of the vast auditorium that is 26-100. The chalkboards behind him are covered with equations. Jaakkola looks relaxed in a short-sleeved black shirt and jeans, and gestures to the board. “What is the answer here?” he asks the 500 MIT students before him. “If you answer, you get a chocolate. If nobody answers, I get one — because I knew the answer and you didn’t.” The room erupts in laughter.

With similar flair but a tighter focus on the first few rows of seats, Regina Barzilay had held the room the week prior. She paused often to ask: “Does this make sense?” If silence ensued, she warmly met the eyes of the students and reassured them: “It’s okay. It will come.” Barzilay acts as though she is teaching a small seminar rather than a stadium-sized class requiring four instructors, 15 teaching assistants, and, on occasion, an overflow room.

Welcome to “Introduction to Machine Learning,” a course in understanding how to give computers the ability to learn things without being explicitly programmed to do so. The popularity of 6.036, as it is also known, grew steadily after it was first offered, from 138 students in 2013 to 302 in 2016. This year 700 students registered for the course — so many that professors had to find ways to winnow the class down to about 500, a size that could fit in one of MIT’s largest lecture halls.

Jaakkola, the Thomas Siebel Professor in the Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society, and Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science, have led 6.036 since its inception. They provide students from varied departments with the necessary tools to apply machine learning in the real world — and they do so, according to students, in a manner that is remarkably engaging.

Greg Young, an MIT senior and electrical engineering and computer science major, says the orchestration of the class, which is co-taught by Wojciech Matusik and Pablo Parrilo from the Department of Electrical Engineering and Computer Science (EECS), is impressive. This is all the more impressive because, in his opinion, the trendiness of machine learning (and, consequently, the class enrollment) is nearly out of hand.

“I think people are going where they think the next big thing is,” Young says. Waving an arm to indicate the hundreds of students lined up in desks below him, he says: “The professors certainly do a good job keeping us engaged, considering the size of this class.”

Indeed, the popularity of 6.036 is such that a version for graduate students — 6.862 (Applied Machine Learning) — was folded into it last spring. These students take 6.036 and do an additional semester-long project that involves applying machine learning methods to a problem in their own research.

“Nowadays machine learning is used almost everywhere to make sense of data,” says faculty lead Stefanie Jegelka, the X-Window Consortium Career Development Assistant Professor in EECS. She says her students come from MIT’s schools of engineering, architecture, science, management, and elsewhere. Only one-third of graduate students seeking to take the spinoff secured seats this semester.

How they learn

The success of 6.036, according to its faculty designers, has to do with its balanced delivery of theoretical content and programming experience — all in enough depth to prove challenging but graspable, and, above all, useful. “Our students want to learn to think like an applied machine-learning person,” says Jaakkola, who launched the pilot course with Barzilay. “We try to expose the material in a way that enables students with very minimal background to sort of get the gist of how things work and why they work.”

Once the domain of science fiction and movies, machine learning has become an integral part of our lived experience. From our expectations as consumers (think of those Netflix and Amazon recommendations), to how we interact with social media (those ads on Facebook are no accident), to how we acquire any kind of information (“Alexa, what is the Laplace transform?”), machine learning algorithms operate, in the simplest sense, by converting large collections of knowledge and information into predictions that are relevant to individual needs.

As a discipline, then, machine learning is the attempt to design and build computer programs that learn from experience for the purpose of prediction or control. In 6.036, students study principles and algorithms for turning training data into effective automated predictions. “The course provides an excellent survey of techniques,” says EECS graduate student Helen Zhou, a 6.036 teaching assistant. “It helps build a foundation for understanding what all those buzzwords in the tech industry mean.”
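
The course’s core idea — turning training data into automated predictions — can be seen in miniature in the perceptron, a classic learning algorithm typically covered in introductory machine learning classes. The sketch below is illustrative only; the toy data and settings are invented, not drawn from 6.036 materials.

```python
# A minimal sketch of "turning training data into automated predictions":
# the perceptron, a classic linear classifier. Data here is an invented toy
# example, not course material.

def perceptron(data, epochs=20):
    """Learn weights w and bias b so that sign(w.x + b) matches the labels."""
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in data:                       # y is +1 or -1
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:             # mistake: nudge toward y
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy linearly separable data: points above the line y = x are labeled +1.
train = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((1.0, 0.0), -1), ((2.0, 1.0), -1)]
w, b = perceptron(train)
print(all(predict(w, b, x) == y for x, y in train))  # True
```

After a few passes over the data, the learned weights separate the two classes — the same learn-from-examples loop that, at much larger scale, drives the applications the course surveys.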

Developing technology-based tools to address racism and bias

In July 2016, feeling frustrated about violence in the news and continued social and economic roadblocks to progress for minorities, members of the Black Alumni of MIT (BAMIT) were galvanized by a letter to the MIT community from President L. Rafael Reif. Responding to a recent series of tragic shootings, he asked “What are we to do?”

BAMIT members gathered in Washington to brainstorm a response, and out of that session emerged a plan to organize a hackathon aimed at finding technology-based solutions to address discrimination. The event, held at MIT last month, was called “Hacking Discrimination” and spearheaded by Elaine Harris ’78 and Lisa Egbuonu-Davis ’79 in partnership with the MIT Alumni Association.

The 11 pitches presented during the two-day hackathon covered a wide range of issues affecting communities of color, including making routine traffic stops less harmful for motorists and police officers, preventing bias in the hiring process by creating a professional profile using a secure blockchain system, flagging unconscious biases using haptic (touch-based) feedback and augmented reality, and providing advice for those who experience discrimination.

Hackathon winners were selected in three categories — Innovation, Impact, and Storytelling — and received gifts valued at $1,500. The teams also received advice from local experts on their topics throughout the second day of hacking.

The Innovation prize was awarded to Taste Voyager, a platform that enables individuals or families to host guests and foster cultural understanding over a home-cooked meal. The Impact prize went to Rahi, a smartphone app that makes shopping easier for recipients of the federally funded Women, Infants, and Children food-assistance program. The Storytelling prize was awarded to Just-Us and Health, which uses surveys to track the effects of discrimination in neighborhoods.

As Randal Pinkett SM ’98, MBA ’98, PhD ’02 said in his keynote speech, “Technology alone won’t solve bias in the U.S.,” and the hackathon made sure to focus on technology’s human users. Under the guidance of Fahad Punjwani, an MIT graduate student in integrated design and management, the event’s mentors ensured that participants considered not just how to deploy their technologies but also the people they aimed to serve.

With a human-centered design process as the guideline, Punjwani encouraged participants to speak with people affected by the problem and carefully define their target audience. For some, including the Taste Voyager team, which began the hackathon as Immigrant Integration, this resulted in an overhaul of the project. Examining their target audience led the team to switch their focus from helping immigrants integrate to creating a way for people of different backgrounds to connect and help each other in a safe space.

“We hacked the topic of our topic,” said Jennifer Williams of the Lincoln Laboratory’s Human Language Technology group, who led the team.

The Rahi team, which was led by Hildreth England, assistant director of the Media Lab’s Open Agriculture Initiative, also focused on the user as it attempted to improve the national Women, Infants, and Children (WIC) nutrition program by acknowledging the racial and ethnic inequalities embedded in the food system. For example, according to Feeding America, one in five African-American and Latino households is food insecure — lacking consistent and adequate access to affordable and nutritious food — compared to one in 10 Caucasian households.

Academic success despite an inauspicious start

When Armando Solar-Lezama was a third grader in Mexico City, his science class did a unit on electrical circuits. The students were divided into teams of three, and each team member had to bring in a light bulb, a battery, or a switch.

Solar-Lezama, whose father worked for an electronics company, volunteered to provide the switch. Using electrical components his father had brought home from work, Solar-Lezama built a “flip-flop” circuit and attached it to a touch-sensitive field effect transistor. When the circuit was off, touching the transistor turned it on, and when it was on, touching the transistor turned it off. “I was pretty proud of my circuit,” says Solar-Lezama, now an MIT professor of electrical engineering and computer science.

By the time he got to school, however, one of his soldered connections had come loose, and the circuit’s performance was erratic. “They failed the whole group,” Solar-Lezama says. “And everybody was like, ‘Why couldn’t you just go to the store and get a switch like normal people do?’”

The next year, in an introductory computer science class, Solar-Lezama was assigned to write a simple program that would send a few lines of text to a printer. Instead, he wrote a program that asked the user a series of questions, each question predicated on the response to the one before. The answer to the final question determined the text that would be sent to the printer.

This time, the program worked perfectly. But “the teacher failed me because that’s not what the assignment was supposed to be,” Solar-Lezama says. “The educational system was not particularly flexible.”

At that point, Solar-Lezama abandoned trying to import his extracurricular interests into the classroom. “I sort of brushed it off,” he recalls. “I was doing my own thing. As long as school didn’t take too much of my time, it was fine.”

So, in 1997, when Solar-Lezama’s father moved the family to College Station, Texas — the Mexican economy was still in the throes of the three-year-old peso crisis — the 15-year-old Armando began to teach himself calculus and linear algebra.

Accustomed to the autonomy of living in a huge city with a subway he could take anywhere, Solar-Lezama bridled at having to depend on rides from his parents to so much as go to the library. “For the first three years that I was in Texas, I was convinced that as soon as I turned 18, I was going to go back to Mexico,” he says. “Because what was I doing in this place in the middle of nowhere?” He began systematically educating himself in everything he would need to ace the Mexican college entrance exams.

Process for positioning quantum bits in diamond

Quantum computers are experimental devices that offer large speedups on some computational problems. One promising approach to building them involves harnessing nanometer-scale atomic defects in diamond materials.

But practical, diamond-based quantum computing devices will require the ability to position those defects at precise locations in complex diamond structures, where the defects can function as qubits, the basic units of information in quantum computing. In today’s issue of Nature Communications, a team of researchers from MIT, Harvard University, and Sandia National Laboratories reports a new technique for creating targeted defects, which is simpler and more precise than its predecessors.

In experiments, the defects produced by the technique were, on average, within 50 nanometers of their ideal locations.

“The dream scenario in quantum information processing is to make an optical circuit to shuttle photonic qubits and then position a quantum memory wherever you need it,” says Dirk Englund, an associate professor of electrical engineering and computer science who led the MIT team. “We’re almost there with this. These emitters are almost perfect.”

The new paper has 15 co-authors. Seven are from MIT, including Englund and first author Tim Schröder, who was a postdoc in Englund’s lab when the work was done and is now an assistant professor at the University of Copenhagen’s Niels Bohr Institute. Edward Bielejec led the Sandia team, and physics professor Mikhail Lukin led the Harvard team.

Appealing defects

Quantum computers, which are still largely hypothetical, exploit the phenomenon of quantum “superposition,” or the counterintuitive ability of small particles to inhabit contradictory physical states at the same time. An electron, for instance, can be said to be in more than one location simultaneously, or to have both of two opposed magnetic orientations.

Where a bit in a conventional computer can represent zero or one, a “qubit,” or quantum bit, can represent zero, one, or both at the same time. It’s the ability of strings of qubits to, in some sense, simultaneously explore multiple solutions to a problem that promises computational speedups.
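
For readers who want something concrete, the superposition idea can be sketched numerically. This illustrative example (not from the paper) represents a qubit as two complex amplitudes over the basis states 0 and 1; the squared magnitudes of the amplitudes give the probabilities of each measurement outcome.

```python
# Illustrative sketch (not from the paper): a single qubit's state as a pair
# of complex amplitudes (alpha, beta) over the basis states |0> and |1>.
# Measurement yields 0 with probability |alpha|^2 and 1 with probability |beta|^2.
import math

def probabilities(alpha, beta):
    p0 = abs(alpha) ** 2
    p1 = abs(beta) ** 2
    assert math.isclose(p0 + p1, 1.0), "amplitudes must be normalized"
    return p0, p1

# A classical bit is one of the two extremes:
print(probabilities(1.0, 0.0))        # (1.0, 0.0) -> definitely 0

# An equal superposition -- "zero, one, or both at the same time":
a = 1 / math.sqrt(2)
p0, p1 = probabilities(a, a)
print(round(p0, 3), round(p1, 3))     # 0.5 0.5
```

The normalization check is the only rule: any pair of amplitudes whose squared magnitudes sum to one is a valid qubit state, which is what gives strings of qubits their vastly larger state space.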

Diamond-defect qubits result from the combination of “vacancies,” which are locations in the diamond’s crystal lattice where there should be a carbon atom but there isn’t one, and “dopants,” which are atoms of materials other than carbon that have found their way into the lattice. Together, the dopant and the vacancy create a dopant-vacancy “center,” which has free electrons associated with it. The electrons’ magnetic orientation, or “spin,” which can be in superposition, constitutes the qubit.

A perennial problem in the design of quantum computers is how to read information out of qubits. Diamond defects present a simple solution, because they are natural light emitters. In fact, the light particles emitted by diamond defects can preserve the superposition of the qubits, so they could move quantum information between quantum computing devices.

Maintain framing of an aerial shot

In recent years, a host of Hollywood blockbusters — including “The Fast and the Furious 7,” “Jurassic World,” and “The Wolf of Wall Street” — have included aerial tracking shots provided by drone helicopters outfitted with cameras.

Those shots required separate operators for the drones and the cameras, and careful planning to avoid collisions. But a team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and ETH Zurich hopes to make drone cinematography more accessible, simple, and reliable.

At the International Conference on Robotics and Automation later this month, the researchers will present a system that allows a director to specify a shot’s framing — which figures or faces appear where, and at what distance. Then, on the fly, it generates control signals for a camera-equipped autonomous drone that preserve that framing as the actors move.

As long as the drone’s information about its environment is accurate, the system also guarantees that it won’t collide with either stationary or moving obstacles.
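
The paper’s controller is far more sophisticated than anything that fits here, but the underlying feedback idea can be sketched with a toy proportional controller that pans a stationary camera to keep a moving subject in view; the scenario, positions, and gain below are all illustrative assumptions, not the authors’ method.

```python
# Toy sketch of the framing idea (not the authors' controller): a proportional
# controller nudges the camera's pan angle so a moving subject stays at the
# center of the image. All numbers are illustrative.
import math

GAIN = 0.5  # illustrative proportional gain

def pan_error(camera_x, camera_pan, subject_x, subject_y):
    """Angle between where the camera points and where the subject is."""
    bearing = math.atan2(subject_y, subject_x - camera_x)
    return bearing - camera_pan

def step(camera_pan, error):
    return camera_pan + GAIN * error  # close a fraction of the error each tick

# Subject walks along the line y = 5 while the camera sits at the origin.
pan = 0.0
for sx in [0.0, 1.0, 2.0, 3.0, 4.0]:
    err = pan_error(0.0, pan, sx, 5.0)
    pan = step(pan, err)
print(abs(pan_error(0.0, pan, 4.0, 5.0)) < 0.2)  # True: pan has locked on
```

A real system closes this loop over the drone’s full pose and the obstacle map at once, but the principle is the same: measure the framing error, then command motion that shrinks it.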

“There are other efforts to do autonomous filming with one drone,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and a senior author on the new paper. “They can follow someone, but if the subject turns, say 180 degrees, the drone will end up showing the back of the subject. With our solution, if the subject turns 180 degrees, our drones are able to circle around and keep focus on the face. We are able to specify richer higher-level constraints for the drones. The drones then map the high-level specifications into control and we end up with greater levels of interaction between the drones and the subjects.”

Joining Rus on the paper are Javier Alonso-Mora, who was a postdoc in her group when the work was done and is now an assistant professor of robotics at the Delft University of Technology; Tobias Nägeli, a graduate student at ETH Zurich and his advisor Otmar Hilliges, an assistant professor of computer science; and Alexander Domahidi, CTO of Embotech, an autonomous-systems company that spun out of ETH.

System allocates data center bandwidth more fairly

A webpage today is often the sum of many different components. A user’s home page on a social-networking site, for instance, might display the latest posts from the user’s friends; the associated images, links, and comments; notifications of pending messages and comments on the user’s own posts; a list of events; a list of topics currently driving online discussions; a list of games, some of which are flagged to indicate that it’s the user’s turn; and of course the all-important ads, which the site depends on for revenues.

With increasing frequency, each of those components is handled by a different program running on a different server in the website’s data center. That reduces processing time, but it exacerbates another problem: the equitable allocation of network bandwidth among programs.

Many websites aggregate all of a page’s components before shipping them to the user. So if just one program has been allocated too little bandwidth on the data center network, the rest of the page — and the user — could be stuck waiting for its component.

At the Usenix Symposium on Networked Systems Design and Implementation this week, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are presenting a new system for allocating bandwidth in data center networks. In tests, the system maintained the same overall data transmission rate — or network “throughput” — as those currently in use, but it allocated bandwidth much more fairly, completing the download of all of a page’s components up to four times as quickly.
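
The notion of fairness at stake can be illustrated with max-min fair allocation, the classic textbook scheme in which every flow starts with an equal share and flows that need less than their share return the surplus for redistribution. This is a standard sketch of the concept, not the specific algorithm in the CSAIL paper.

```python
# Textbook sketch of max-min fair sharing (not the paper's algorithm):
# capacity is split evenly; flows that need less than their share return
# the surplus, which is redistributed among the remaining flows.

def max_min_fair(capacity, demands):
    alloc = [0.0] * len(demands)
    active = list(range(len(demands)))
    while active and capacity > 1e-12:
        share = capacity / len(active)
        satisfied = [i for i in active if demands[i] - alloc[i] <= share]
        if not satisfied:
            # Every remaining flow wants more than an equal share: split evenly.
            for i in active:
                alloc[i] += share
            capacity = 0.0
        else:
            # Satisfy the small flows fully; recycle their unused share.
            for i in satisfied:
                capacity -= demands[i] - alloc[i]
                alloc[i] = demands[i]
            active = [i for i in active if i not in satisfied]
    return alloc

# 10 units of link capacity, four flows with unequal demands.
print([round(a, 2) for a in max_min_fair(10.0, [2.0, 2.6, 4.0, 5.0])])
# [2.0, 2.6, 2.7, 2.7]
```

The two small flows get everything they asked for, and the leftover capacity is split evenly between the two big ones — no flow can gain bandwidth without taking it from a flow that already has less.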

“There are easy ways to maximize throughput in a way that divides up the resource very unevenly,” says Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science and one of two senior authors on the paper describing the new system. “What we have shown is a way to very quickly converge to a good allocation.”

Joining Balakrishnan on the paper are first author Jonathan Perry, a graduate student in electrical engineering and computer science, and Devavrat Shah, a professor of electrical engineering and computer science.

State-of-the-art facility for prototyping advanced fabrics

Just over a year after its funding award, a new center for the development and commercialization of advanced fabrics is officially opening its headquarters today in Cambridge, Massachusetts, and will be unveiling the first two advanced fabric products to be commercialized from the center’s work.

Advanced Functional Fabrics of America (AFFOA) is a public-private partnership, part of Manufacturing USA, that is working to develop and introduce U.S.-made high-tech fabrics that provide services such as health monitoring, communications, and dynamic design. In the process, AFFOA aims to facilitate economic growth through U.S. fiber and fabric manufacturing.

AFFOA’s national headquarters will open today, with an event featuring Under Secretary of Defense for Acquisition, Technology, and Logistics James MacStravic, U.S. Senator Elizabeth Warren, U.S. Rep. Niki Tsongas, U.S. Rep. Joe Kennedy, Massachusetts Governor Charlie Baker, New Balance CEO Robert DeMartini, MIT President L. Rafael Reif, and AFFOA CEO Yoel Fink. Sample versions of one of the center’s new products, a programmable backpack made of advanced fabric produced in North and South Carolina, will be distributed to attendees at the opening.

AFFOA was created last year with over $300 million in funding from the U.S. and state governments and from academic and corporate partners, to help foster the creation of revolutionary new developments in fabric and fiber-based products. The institute seeks to create “fabrics that see, hear, sense, communicate, store and convert energy, regulate temperature, monitor health, and change color,” says Fink, a professor of materials science and engineering at MIT. In short, he says, AFFOA aims to catalyze the creation of a whole new industry that envisions “fabrics as the new software.”

Under Fink’s leadership, the independent, nonprofit organization has already created a network of more than 100 partners, including much of the fabric manufacturing base in the U.S. as well as startups and universities spread across 28 states.

“AFFOA’s promise reflects the very best of MIT: It’s bold, innovative, and daring,” says MIT President L. Rafael Reif. “It leverages and drives technology to solve complex problems, in service to society. And it draws its strength from a rich network of collaborators — across governments, universities, and industries. It has been inspiring to watch the partnership’s development this past year, and it will be exciting to witness the new frontiers and opportunities it will open.”

A “Moore’s Law” for fabrics

While products that attempt to incorporate electronic functions into fabrics have been conceptualized, most of these have involved attaching various types of patches to existing fabrics. The kinds of fabrics and fibers envisioned by — and already starting to emerge from — AFFOA will have these functions embedded within the fibers themselves.

Referring to the principle that describes the very rapid development of computer chip technology over the last few decades, Fink says AFFOA is dedicated to a “Moore’s Law for fibers” — that is, ensuring that there will be a recurring growth in fiber technology in this newly developing field.

A key element in the center’s approach is to develop the technology infrastructure for advanced, internet-connected fabric products that enable new business models for the fabric industry. With highly functional fabric systems, the ability to offer consumers “fabrics as a service” creates value in the textile industry — moving it from producing goods in a price-competitive market, to practicing recurring revenue models with rapid innovation cycles that are now characteristic of high-margin technology business sectors.

From idea to product

To enable rapid transition from idea to product, a high-tech national product-prototyping ecosystem called the Fabric Innovation Network (FIN) has been assembled. The FIN is made up of small, medium, and large manufacturers and academic centers that have production capabilities allocated to AFFOA projects, which rapidly execute prototypes and pilot manufacturing of advanced fabric products, decreasing time to market and accelerating product innovation. The product prototypes being rolled out today were executed through this network in a matter of weeks.

The new headquarters in Cambridge, which was renovated for this purpose with state and MIT funding, is called a Fabric Discovery Center (FDC). It was designed to support three main thrusts: a startup accelerator and incubator that provides space, tools, and guidance to new companies working to develop new advanced fabric-based products; a section devoted to education, offering students hands-on opportunities to explore this cutting-edge field and develop the skills to become part of it; and the world’s first end-to-end prototyping facility, with advanced computer-assisted design and fabrication tools, to help accelerate new advanced fabric ideas from the concept to functional products.

Advanced Functional Fabrics of America opens headquarters steps from MIT campus

These are not your grandmother’s fibers and textiles. These are tomorrow’s functional fabrics — designed and prototyped in Cambridge, Massachusetts, and manufactured across a network of U.S. partners. This is the vision of the new headquarters for the Manufacturing USA institute called Advanced Functional Fabrics of America (AFFOA) that opened Monday at 12 Emily Street, steps away from the MIT campus.

AFFOA headquarters represents a significant MIT investment in advanced manufacturing innovation. This facility includes a Fabric Discovery Center that provides end-to-end prototyping from fiber design to system integration of new textile-based products, and will be used for education and workforce development in the Cambridge and greater Boston community. AFFOA headquarters also includes startup incubation space for companies spun out from MIT and other partners who are innovating advanced fabrics and fibers for applications ranging from apparel and consumer electronics to automotive and medical devices.

MIT was a founding member of the AFFOA team that partnered with the Department of Defense in April 2016 to launch this new institute as a public-private partnership through an independent nonprofit also founded by MIT. AFFOA’s chief executive officer is Yoel Fink. Prior to his current role, Fink led the AFFOA proposal last year as professor of materials science and engineering and director of the Research Laboratory of Electronics at MIT, with his vision to create a “fabric revolution.” That revolution under Fink’s leadership was grounded in new fiber materials and textile manufacturing processes for fabrics that see, hear, sense, communicate, store and convert energy, and monitor health.

From the perspectives of research, education, and entrepreneurship, MIT engagement in AFFOA draws from many strengths. These include the multifunctional drawn fibers developed by Fink and others, which embed multiple materials within a single fiber so that the fiber itself functions as a device. That fiber concept, developed at MIT, has been applied to key challenges in the defense sector through MIT’s Institute for Soldier Nanotechnologies, commercialized through a startup called OmniGuide (now OmniGuide Surgical, which makes laser surgery devices), and extended to several new areas, including neural probes developed by Polina Anikeeva, MIT associate professor of materials science and engineering. Beyond these diverse uses of fiber devices, MIT faculty including Greg Rutledge, the Lammot du Pont Professor of Chemical Engineering, have also led innovation in predictive modeling and design of polymer nanofibers, fiber processing and characterization, and self-assembly of woven and nonwoven filters and textiles for diverse applications and industries.

Rutledge coordinates MIT campus engagement in the AFFOA Institute, and notes that “MIT has a range of research and teaching talent that impacts manufacturing of fiber and textile-based products, from designing the fiber to leading the factories of the future. Many of our faculty also have longstanding collaborations with partners in defense and industry on these projects, including with Lincoln Laboratory and the Army’s Natick Soldier Research Development and Engineering Center, so MIT membership in AFFOA is an opportunity to strengthen and grow those networks.”

Faculty at MIT across several departments and schools have also created innovative new product concepts ranging from sweat-responsive sports apparel advanced by Professor Hiroshi Ishii’s group to design of self-folding strands of multi-material fibers by Professor Skylar Tibbits. Professors Neri Oxman and Craig Carter developed new modeling and materials fabrication capabilities that facilitated the first 3-D-printed dress, featured at Paris Fashion Week in 2013. Innovations in functional fabrics developed on projects involving MIT and run through the Fabric Discovery Center could range from monitoring human wellness to identifying flaws in the structural integrity of the built environment. In fact, many of these fiber and textile manufacturing technologies and products include active or passive sensing capabilities, highlighting the synergies of MIT participation in several manufacturing institutes that need or use this functionality. Those connections motivated the SENSE.nano symposium in May that launched the first center of excellence in the MIT.nano building that is nearing completion on campus.

Patterns to produce any 3-D structure

In a 1999 paper, Erik Demaine — now an MIT professor of electrical engineering and computer science, but then an 18-year-old PhD student at the University of Waterloo, in Canada — described an algorithm that could determine how to fold a piece of paper into any conceivable 3-D shape.

It was a milestone paper in the field of computational origami, but the algorithm didn’t yield very practical folding patterns. Essentially, it took a very long strip of paper and wound it into the desired shape. The resulting structures tended to have lots of seams where the strip doubled back on itself, so they weren’t very sturdy.

At the Symposium on Computational Geometry in July, Demaine and Tomohiro Tachi of the University of Tokyo will announce the completion of a quest that began with that 1999 paper: a universal algorithm for folding origami shapes that guarantees a minimum number of seams.

“In 1999, we proved that you could fold any polyhedron, but the way that we showed how to do it was very inefficient,” Demaine says. “It’s efficient if your initial piece of paper is super-long and skinny. But if you were going to start with a square piece of paper, then that old method would basically fold the square paper down to a thin strip, wasting almost all the material. The new result promises to be much more efficient. It’s a totally different strategy for thinking about how to make a polyhedron.”

Demaine and Tachi are also working to implement the algorithm in a new version of Origamizer, the free software for generating origami crease patterns whose first version Tachi released in 2008.

Maintaining boundaries

The researchers’ algorithm designs crease patterns for producing any polyhedron — that is, a 3-D surface made up of many flat facets. Computer graphics software, for instance, models 3-D objects as polyhedra consisting of many tiny triangles. “Any curved shape you could approximate with lots of little flat sides,” Demaine explains.
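
The 2-D analogue of that approximation makes the idea concrete: a circle approximated by a regular polygon gets closer to the true curve as the number of flat sides grows. The snippet below is purely illustrative and is not part of the researchers’ algorithm.

```python
# "Any curved shape you could approximate with lots of little flat sides":
# the 2-D analogue. A regular n-gon inscribed in a unit circle approaches
# the circle's circumference as n grows.
import math

def polygon_perimeter(n, radius=1.0):
    """Perimeter of a regular n-gon inscribed in a circle of given radius."""
    side = 2 * radius * math.sin(math.pi / n)
    return n * side

true_circumference = 2 * math.pi
for n in [6, 24, 96]:
    err = true_circumference - polygon_perimeter(n)
    print(n, round(err, 5))  # error shrinks as the side count grows
```

The same trade-off governs polyhedral models of 3-D surfaces: more, smaller facets mean a closer fit, at the cost of a more complex crease pattern to fold.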

Technically speaking, the guarantee that the folding will involve the minimum number of seams means that it preserves the “boundaries” of the original piece of paper. Suppose, for instance, that you have a circular piece of paper and want to fold it into a cup. Leaving a smaller circle at the center of the piece of paper flat, you could bunch the sides together in a pleated pattern; in fact, some water-cooler cups are manufactured in exactly this way.

In this case, the boundary of the cup — its rim — is the same as that of the unfolded circle — its outer edge. The same would not be true with the folding produced by Demaine and his colleagues’ earlier algorithm. There, the cup would consist of a thin strip of paper wrapped round and round in a coil — and it probably wouldn’t hold water.

“The new algorithm is supposed to give you much better, more practical foldings,” Demaine says. “We don’t know how to quantify that mathematically, exactly, other than it seems to work much better in practice. But we do have one mathematical property that nicely distinguishes the two methods. The new method keeps the boundary of the original piece of paper on the boundary of the surface you’re trying to make. We call this watertightness.”

A closed surface — such as a sphere — doesn’t have a boundary, so an origami approximation of it will require a seam where boundaries meet. But “the user gets to choose where to put that boundary,” Demaine says. “You can’t get an entire closed surface to be watertight, because the boundary has to be somewhere, but you get to choose where that is.”

Analysis of laparoscopic procedures

Laparoscopy is a surgical technique in which a fiber-optic camera is inserted into a patient’s abdominal cavity to provide a video feed that guides the surgeon through a minimally invasive procedure.

Laparoscopic surgeries can take hours, and the video generated by the camera — the laparoscope — is often recorded. Those recordings contain a wealth of information that could be useful for training both medical providers and computer systems that would aid with surgery, but because reviewing them is so time consuming, they mostly sit idle.

Researchers at MIT and Massachusetts General Hospital hope to change that, with a new system that can efficiently search through hundreds of hours of video for events and visual features that correspond to a few training examples.

In work they presented at the International Conference on Robotics and Automation this month, the researchers trained their system to recognize different stages of an operation, such as biopsy, tissue removal, stapling, and wound cleansing.

But the system could be applied to any analytical question that doctors deem worthwhile. It could, for instance, be trained to predict when particular medical instruments — such as additional staple cartridges — should be prepared for the surgeon’s use, or it could sound an alert if a surgeon encounters rare, aberrant anatomy.
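
As an illustration of how a few training examples can index hours of video, the sketch below labels frames by nearest neighbor against a handful of labeled example feature vectors, then records where each phase begins. The features and labels are hypothetical, and the paper’s actual method is considerably more sophisticated.

```python
# Illustrative sketch (not the paper's model): classify video frames into
# surgical phases by nearest neighbor against a few labeled example feature
# vectors, then report where each phase first appears in the video.
import math

def nearest_phase(frame, examples):
    """examples: list of (feature_vector, phase_label) pairs."""
    return min(examples, key=lambda ex: math.dist(ex[0], frame))[1]

# Hypothetical 2-D frame features; real systems use learned descriptors.
examples = [((0.1, 0.9), "biopsy"), ((0.8, 0.2), "stapling")]
video = [(0.15, 0.85), (0.2, 0.8), (0.75, 0.3), (0.9, 0.1)]

labels = [nearest_phase(f, examples) for f in video]
segments = {}
for t, label in enumerate(labels):
    segments.setdefault(label, t)      # first frame index of each phase
print(labels)     # ['biopsy', 'biopsy', 'stapling', 'stapling']
print(segments)   # {'biopsy': 0, 'stapling': 2}
```

Once every frame carries a phase label, jumping straight to “phase two of the surgery” becomes a dictionary lookup rather than hours of manual review — the indexing capability the surgeons describe.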

“Surgeons are thrilled by all the features that our work enables,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and senior author on the paper. “They are thrilled to have the surgical tapes automatically segmented and indexed, because now those tapes can be used for training. If we want to learn about phase two of a surgery, we know exactly where to go to look for that segment. We don’t have to watch every minute before that. The other thing that is extraordinarily exciting to the surgeons is that in the future, we should be able to monitor the progression of the operation in real-time.”

Joining Rus on the paper are first author Mikhail Volkov, who was a postdoc in Rus’ group when the work was done and is now a quantitative analyst at SMBC Nikko Securities in Tokyo; Guy Rosman, another postdoc in Rus’ group; and Daniel Hashimoto and Ozanan Meireles of Massachusetts General Hospital (MGH).