
Develop technology-based tools to address racism and bias

In July 2016, feeling frustrated about violence in the news and continued social and economic roadblocks to progress for minorities, members of the Black Alumni of MIT (BAMIT) were galvanized by a letter to the MIT community from President L. Rafael Reif. Responding to a recent series of tragic shootings, he asked “What are we to do?”

BAMIT members gathered in Washington to brainstorm a response, and out of that session emerged a plan to organize a hackathon aimed at finding technology-based solutions to address discrimination. The event, held at MIT last month, was called “Hacking Discrimination” and spearheaded by Elaine Harris ’78 and Lisa Egbuonu-Davis ’79 in partnership with the MIT Alumni Association.

The 11 pitches presented during the two-day hackathon covered a wide range of issues affecting communities of color, including making routine traffic stops less harmful for motorists and police officers, preventing bias in the hiring process by creating a professional profile using a secure blockchain system, flagging unconscious biases using haptic (touch-based) feedback and augmented reality, and providing advice for those who experience discrimination.

Hackathon winners were selected in three categories – Innovation, Impact, and Storytelling – and received gifts valued at $1,500. The teams also received advice from local experts on their topics throughout the second day of hacking.

The Innovation prize was awarded to Taste Voyager, a platform that enables individuals or families to host guests and foster cultural understanding over a home-cooked meal. The Impact prize went to Rahi, a smartphone app that makes shopping easier for recipients of the federally funded Women, Infants, and Children food-assistance program. The Storytelling prize was awarded to Just-Us and Health, which uses surveys to track the effects of discrimination in neighborhoods.

As Randal Pinkett SM ’98, MBA ’98, PhD ’02 said in his keynote speech, “Technology alone won’t solve bias in the U.S.,” and the hackathon made sure to focus on technology’s human users. Under the guidance of Fahad Punjwani, an MIT graduate student in integrated design and management, the event’s mentors ensured that participants considered not just how to deploy their technologies but also the people they aimed to serve.

With a human-centered design process as the guideline, Punjwani encouraged participants to speak with people affected by the problem and carefully define their target audience. For some, including the Taste Voyager team, which began the hackathon as Immigrant Integration, this resulted in an overhaul of the project. Examining their target audience led the team to switch their focus from helping immigrants integrate to creating a way for people of different backgrounds to connect and help each other in a safe space.

“We hacked the topic of our topic,” said Jennifer Williams of the Lincoln Laboratory’s Human Language Technology group, who led the team.

The Rahi team, which was led by Hildreth England, assistant director of the Media Lab’s Open Agriculture Initiative, also focused on the user as it attempted to improve the national Women, Infants, and Children (WIC) nutrition program by acknowledging the racial and ethnic inequalities embedded in the food system. For example, according to Feeding America, one in five African-American and Latino households is food insecure — lacking consistent and adequate access to affordable and nutritious food — compared to one in 10 Caucasian households.

Academic success despite an inauspicious start

When Armando Solar-Lezama was a third grader in Mexico City, his science class did a unit on electrical circuits. The students were divided into teams of three, and each team member had to bring in a light bulb, a battery, or a switch.

Solar-Lezama, whose father worked for an electronics company, volunteered to provide the switch. Using electrical components his father had brought home from work, Solar-Lezama built a “flip-flop” circuit and attached it to a touch-sensitive field effect transistor. When the circuit was off, touching the transistor turned it on, and when it was on, touching the transistor turned it off. “I was pretty proud of my circuit,” says Solar-Lezama, now an MIT professor of electrical engineering and computer science.

By the time he got to school, however, one of his soldered connections had come loose, and the circuit’s performance was erratic. “They failed the whole group,” Solar-Lezama says. “And everybody was like, ‘Why couldn’t you just go to the store and get a switch like normal people do?’”

The next year, in an introductory computer science class, Solar-Lezama was assigned to write a simple program that would send a few lines of text to a printer. Instead, he wrote a program that asked the user a series of questions, each question predicated on the response to the one before. The answer to the final question determined the text that would be sent to the printer.
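The program he describes is, in essence, a decision tree: each answer selects the next question, and a leaf holds the text to print. A minimal Python sketch of that structure (the questions and texts here are invented for illustration; the original program's content is not recorded):

```python
# Invented example questions; the structure is the point: each answer
# selects a branch, and a string leaf is the final text to print.
DECISION_TREE = {
    "question": "Print a greeting? (y/n)",
    "y": {
        "question": "A formal one? (y/n)",
        "y": "Dear reader, welcome.",
        "n": "Hey there!",
    },
    "n": "Goodbye.",
}

def resolve(tree, answers):
    """Walk the tree: each answer picks a branch; stop at a string leaf."""
    node = tree
    for answer in answers:
        node = node[answer]
        if isinstance(node, str):   # reached the final text
            return node
    return node
```

Answering "y" then "n", for instance, walks two levels down and returns the informal greeting as the text to send to the printer.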

This time, the program worked perfectly. But “the teacher failed me because that’s not what the assignment was supposed to be,” Solar-Lezama says. “The educational system was not particularly flexible.”

At that point, Solar-Lezama abandoned trying to import his extracurricular interests into the classroom. “I sort of brushed it off,” he recalls. “I was doing my own thing. As long as school didn’t take too much of my time, it was fine.”

So, in 1997, when Solar-Lezama’s father moved the family to College Station, Texas — the Mexican economy was still in the throes of the three-year-old Mexican peso crisis — the 15-year-old Armando began to teach himself calculus and linear algebra.

Accustomed to the autonomy of living in a huge city with a subway he could take anywhere, Solar-Lezama bridled at having to depend on rides from his parents to so much as go to the library. “For the first three years that I was in Texas, I was convinced that as soon as I turned 18, I was going to go back to Mexico,” he says. “Because what was I doing in this place in the middle of nowhere?” He began systematically educating himself in everything he would need to ace the Mexican college entrance exams.

Process for positioning quantum bits in diamond

Quantum computers are experimental devices that promise large speedups on some computational problems. One promising approach to building them involves harnessing nanometer-scale atomic defects in diamond materials.

But practical, diamond-based quantum computing devices will require the ability to position those defects at precise locations in complex diamond structures, where the defects can function as qubits, the basic units of information in quantum computing. In today's issue of Nature Communications, a team of researchers from MIT, Harvard University, and Sandia National Laboratories reports a new technique for creating targeted defects, which is simpler and more precise than its predecessors.

In experiments, the defects produced by the technique were, on average, within 50 nanometers of their ideal locations.

“The dream scenario in quantum information processing is to make an optical circuit to shuttle photonic qubits and then position a quantum memory wherever you need it,” says Dirk Englund, an associate professor of electrical engineering and computer science who led the MIT team. “We’re almost there with this. These emitters are almost perfect.”

The new paper has 15 co-authors. Seven are from MIT, including Englund and first author Tim Schröder, who was a postdoc in Englund’s lab when the work was done and is now an assistant professor at the University of Copenhagen’s Niels Bohr Institute. Edward Bielejec led the Sandia team, and physics professor Mikhail Lukin led the Harvard team.

Appealing defects

Quantum computers, which are still largely hypothetical, exploit the phenomenon of quantum “superposition,” or the counterintuitive ability of small particles to inhabit contradictory physical states at the same time. An electron, for instance, can be said to be in more than one location simultaneously, or to have both of two opposed magnetic orientations.

Where a bit in a conventional computer can represent zero or one, a “qubit,” or quantum bit, can represent zero, one, or both at the same time. It’s the ability of strings of qubits to, in some sense, simultaneously explore multiple solutions to a problem that promises computational speedups.
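Numerically, a qubit's state can be pictured as a pair of complex "amplitudes" whose squared magnitudes give the probabilities of measuring 0 or 1. The toy sketch below (an illustration only, not anything from the paper) makes that concrete:

```python
# A qubit state alpha|0> + beta|1>: measuring yields 0 with probability
# |alpha|^2 and 1 with probability |beta|^2; superposition means both
# amplitudes are nonzero at once.
import math

def measure_probs(alpha, beta):
    """Return (P(measure 0), P(measure 1)) for the state alpha|0> + beta|1>."""
    p0 = abs(alpha) ** 2
    p1 = abs(beta) ** 2
    assert math.isclose(p0 + p1, 1.0), "amplitudes must be normalized"
    return p0, p1

# The equal superposition (|0> + |1>) / sqrt(2): until measured, the
# qubit carries both values, and either outcome is equally likely.
p0, p1 = measure_probs(1 / math.sqrt(2), 1 / math.sqrt(2))
```

The "both at the same time" in the paragraph above corresponds to both amplitudes being nonzero; measurement collapses the state to one outcome with the computed probabilities.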

Diamond-defect qubits result from the combination of “vacancies,” which are locations in the diamond’s crystal lattice where there should be a carbon atom but there isn’t one, and “dopants,” which are atoms of materials other than carbon that have found their way into the lattice. Together, the dopant and the vacancy create a dopant-vacancy “center,” which has free electrons associated with it. The electrons’ magnetic orientation, or “spin,” which can be in superposition, constitutes the qubit.

A perennial problem in the design of quantum computers is how to read information out of qubits. Diamond defects present a simple solution, because they are natural light emitters. In fact, the light particles emitted by diamond defects can preserve the superposition of the qubits, so they could move quantum information between quantum computing devices.

Maintain framing of an aerial shot

In recent years, a host of Hollywood blockbusters — including "Furious 7," "Jurassic World," and "The Wolf of Wall Street" — have included aerial tracking shots provided by drone helicopters outfitted with cameras.

Those shots required separate operators for the drones and the cameras, and careful planning to avoid collisions. But a team of researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and ETH Zurich hopes to make drone cinematography more accessible, simple, and reliable.

At the International Conference on Robotics and Automation later this month, the researchers will present a system that allows a director to specify a shot's framing — which figures or faces appear where, and at what distance. Then, on the fly, it generates control signals for a camera-equipped autonomous drone that preserve that framing as the actors move.
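The core idea — turning the gap between the specified framing and the subject's current position on screen into drone commands — can be sketched with a basic proportional controller. This is a deliberately simplified, hypothetical stand-in for the system's actual optimization-based control; names and gains below are invented:

```python
# Measure how far the subject has drifted from the director's specified
# framing, and turn that error into a velocity command.

def framing_control(subject_pos, subject_size,
                    target_pos=(0.5, 0.5), target_size=0.3,
                    k_pos=1.0, k_size=2.0):
    """subject_pos: the subject's normalized (x, y) position on screen;
    subject_size: the fraction of the frame height the subject fills.
    Returns (pan, tilt, forward) velocity commands for the drone."""
    pan = k_pos * (target_pos[0] - subject_pos[0])    # re-center horizontally
    tilt = k_pos * (target_pos[1] - subject_pos[1])   # re-center vertically
    forward = k_size * (target_size - subject_size)   # fly closer or farther
    return pan, tilt, forward
```

If the subject is exactly where the director specified, all commands are zero; if the subject shrinks in frame (moved away), the forward command grows, sending the drone closer to restore the framing.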

As long as the drone’s information about its environment is accurate, the system also guarantees that it won’t collide with either stationary or moving obstacles.

“There are other efforts to do autonomous filming with one drone,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and a senior author on the new paper. “They can follow someone, but if the subject turns, say 180 degrees, the drone will end up showing the back of the subject. With our solution, if the subject turns 180 degrees, our drones are able to circle around and keep focus on the face. We are able to specify richer higher-level constraints for the drones. The drones then map the high-level specifications into control and we end up with greater levels of interaction between the drones and the subjects.”

Joining Rus on the paper are Javier Alonso-Mora, who was a postdoc in her group when the work was done and is now an assistant professor of robotics at the Delft University of Technology; Tobias Nägeli, a graduate student at ETH Zurich, and his advisor, Otmar Hilliges, an assistant professor of computer science; and Alexander Domahidi, CTO of Embotech, an autonomous-systems company that spun out of ETH.

System allocates data center bandwidth more fairly

A webpage today is often the sum of many different components. A user’s home page on a social-networking site, for instance, might display the latest posts from the user’s friends; the associated images, links, and comments; notifications of pending messages and comments on the user’s own posts; a list of events; a list of topics currently driving online discussions; a list of games, some of which are flagged to indicate that it’s the user’s turn; and of course the all-important ads, which the site depends on for revenues.

With increasing frequency, each of those components is handled by a different program running on a different server in the website’s data center. That reduces processing time, but it exacerbates another problem: the equitable allocation of network bandwidth among programs.

Many websites aggregate all of a page’s components before shipping them to the user. So if just one program has been allocated too little bandwidth on the data center network, the rest of the page — and the user — could be stuck waiting for its component.

At the Usenix Symposium on Networked Systems Design and Implementation this week, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are presenting a new system for allocating bandwidth in data center networks. In tests, the system maintained the same overall data transmission rate — or network “throughput” — as systems currently in use, but it allocated bandwidth much more fairly, completing the download of all of a page’s components up to four times as quickly.

“There are easy ways to maximize throughput in a way that divides up the resource very unevenly,” says Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science and one of two senior authors on the paper describing the new system. “What we have shown is a way to very quickly converge to a good allocation.”

Joining Balakrishnan on the paper are first author Jonathan Perry, a graduate student in electrical engineering and computer science, and Devavrat Shah, a professor of electrical engineering and computer science.