Category Archives: Internet

Artificial Intelligence Lab speeds data transfer

There are few things more frustrating than trying to use your phone on a crowded network. With phone usage growing faster than wireless spectrum, we’re all now fighting over smaller and smaller bits of bandwidth. Spectrum crunch is such a big problem that the White House is getting involved, recently announcing both a $400 million research initiative and a $4 million global competition devoted to the issue.

But researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) say that they have a possible solution. In a new paper, a team led by professor Dina Katabi demonstrates a system called MegaMIMO 2.0 that can transfer wireless data more than three times faster than existing systems while also doubling the range of the signal.

The key to the soon-to-be-commercialized system is coordinating multiple access points at the same time, on the same frequency, without creating interference. This means that MegaMIMO 2.0 could dramatically improve the speed and strength of wireless networks, particularly at high-usage events like concerts, conventions, and football games.

“In today’s wireless world, you can’t solve spectrum crunch by throwing more transmitters at the problem, because they will all still be interfering with one another,” says Ezzeldin Hamed, a PhD student who is lead author on a new paper on the topic. “The answer is to have all those access points work with each other simultaneously to efficiently use the available spectrum.”

To test MegaMIMO 2.0’s performance, the researchers created a mock conference room with a set of four laptops that each roamed the space atop Roomba robots. The experiments found that the system could increase the devices’ data-transfer speed by 330 percent.

MegaMIMO 2.0’s hardware is the size of a standard router, and consists of a processor, a real-time baseband processing system, and a transceiver board.

Katabi and Hamed co-wrote the paper with Hariharan Rahul SM ’99, PhD ’13, an alum of Katabi’s group and a visiting researcher with the group, as well as visiting student Mohammed A. Abdelghany. Rahul will present the paper at next week’s conference of the Association for Computing Machinery’s Special Interest Group on Data Communication (SIGCOMM 16).

How it works

The main reason that your smartphone works so speedily is multiple-input multiple-output (MIMO), which means that it uses several transmitters and receivers at the same time. Radio waves bounce off surfaces and therefore arrive at the receivers at slightly different times; devices with multiple receivers, then, are able to combine the various streams to transmit data much faster. For example, a router with three antennas works twice as fast as one with two antennas.

But in a world of limited bandwidth, these speeds are still not as fast as they could be, and so in recent years researchers have searched for the wireless industry’s Holy Grail: being able to coordinate several routers at once so that they can triangulate the data even faster and more consistently.

“The problem is that, just like how two radio stations can’t play music over the same frequency at the same time, multiple routers cannot transfer data on the same chunk of spectrum without creating major interference that muddies the signal,” says Rahul.
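To make the interference problem concrete, here is a minimal sketch, in Python with NumPy, of joint zero-forcing precoding: two transmit antennas send two users’ symbols on the same frequency, and inverting the shared channel matrix lets each stream arrive cleanly at its intended receiver. This is a generic textbook illustration of coordinated transmission, not the actual MegaMIMO 2.0 algorithm, which synchronizes physically separate access points in real time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy flat-fading channel: 2 users (rows) x 2 access-point antennas (columns).
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

# Symbols intended for user 0 and user 1, sent on the SAME frequency.
x = np.array([1 + 1j, -1 + 1j])

# Uncoordinated transmission: each antenna simply sends one user's symbol,
# so each user receives a mixture of both streams (interference).
y_uncoordinated = H @ x

# Joint zero-forcing precoding: invert the channel so that the streams
# arrive separated at their intended receivers.
W = np.linalg.pinv(H)
y_coordinated = H @ (W @ x)

print("without coordination:", np.round(y_uncoordinated, 3))
print("with joint precoding:", np.round(y_coordinated, 3))  # approximately the original symbols
```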

Flexible traffic management

Like all data networks, the networks that connect servers in giant server farms, or servers and workstations in large organizations, are prone to congestion. When network traffic is heavy, packets of data can get backed up at network routers or dropped altogether.

Also like all data networks, big private networks have control algorithms for managing network traffic during periods of congestion. But because the routers that direct traffic in a server farm need to be superfast, the control algorithms are hardwired into the routers’ circuitry. That means that if someone develops a better algorithm, network operators have to wait for a new generation of hardware before they can take advantage of it.
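As a concrete example of the kind of traffic-management policy that is traditionally frozen into router silicon, here is a sketch of Random Early Detection (RED), a classic congestion-control rule, written in plain Python. The thresholds are illustrative; the point is that on a programmable router an operator could swap a rule like this for a newer scheme without waiting for new hardware.

```python
import random

def red_should_drop(avg_queue_len, min_th=5, max_th=15, max_p=0.1):
    """Random Early Detection (RED), a classic congestion-control policy.

    Routers traditionally bake a rule like this into hardware; a programmable
    data plane would let operators replace it in software.
    """
    if avg_queue_len < min_th:
        return False                      # light load: accept every packet
    if avg_queue_len >= max_th:
        return True                       # heavy congestion: drop the packet
    # In between, drop with a probability that rises with the queue length.
    drop_prob = max_p * (avg_queue_len - min_th) / (max_th - min_th)
    return random.random() < drop_prob

# Example: probe the policy across increasing levels of congestion.
for q in (2, 8, 12, 20):
    print(q, red_should_drop(q))
```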

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and five other organizations hope to change that, with routers that are programmable but can still keep up with the blazing speeds of modern data networks. The researchers outline their system in a pair of papers being presented at the annual conference of the Association for Computing Machinery’s Special Interest Group on Data Communication.

“This work shows that you can achieve many flexible goals for managing traffic, while retaining the high performance of traditional routers,” says Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science at MIT. “Previously, programmability was achievable, but nobody would use it in production, because it was a factor of 10 or even 100 slower.”

“You need to have the ability for researchers and engineers to try out thousands of ideas,” he adds. “With this platform, you become constrained not by hardware or technological limitations, but by your creativity. You can innovate much more rapidly.”

The first author on both papers is Anirudh Sivaraman, an MIT graduate student in electrical engineering and computer science, advised by both Balakrishnan and Mohammad Alizadeh, the TIBCO Career Development Assistant Professor in Electrical Engineering and Computer Science at MIT, who are coauthors on both papers. They’re joined by colleagues from MIT, the University of Washington, Barefoot Networks, Microsoft Research, Stanford University, and Cisco Systems.

Privacy in genomic databases

Genome-wide association studies, which try to find correlations between particular genetic variations and disease diagnoses, are a staple of modern medical research.

But because they depend on databases that contain people’s medical histories, they carry privacy risks. An attacker armed with genetic information about someone — from, say, a skin sample — could query a database for that person’s medical data. Even without the skin sample, an attacker who was permitted to make repeated queries, each informed by the results of the last, could, in principle, extract private data from the database.

In the latest issue of the journal Cell Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Indiana University at Bloomington describe a new system that permits database queries for genome-wide association studies but reduces the chances of privacy compromises to almost zero.

It does that by adding a little bit of misinformation to the query results it returns. That means that researchers using the system could begin looking for drug targets with slightly inaccurate data. But in most cases, the answers returned by the system will be close enough to be useful.

And an instantly searchable online database of genetic data, even one that returned slightly inaccurate information, could make biomedical research much more efficient.

“Right now, what a lot of people do, including the NIH, for a long time, is take all their data — including, often, aggregate data, the statistics we’re interested in protecting — and put them into repositories,” says Sean Simmons, an MIT postdoc in mathematics and first author on the new paper. “And you have to go through a time-consuming process to get access to them.”

That process involves a raft of paperwork, including explanations of how the research enabled by the repositories will contribute to the public good, which requires careful review. “We’ve waited months to get access to various repositories,” says Bonnie Berger, the Simons Professor of Mathematics at MIT, who was Simmons’s thesis advisor and is the corresponding author on the paper. “Months.”

Bring the noise

Genome-wide association studies generally rely on genetic variations called single-nucleotide polymorphisms, or SNPs (pronounced “snips”). A SNP is a variation of one nucleotide, or DNA “letter,” at a specified location in the genome. Millions of SNPs have been identified in the human population, and certain combinations of SNPs can serve as proxies for larger stretches of DNA that tend to be conserved among individuals.

The new system, which Berger and Simmons developed together with Cenk Sahinalp, a professor of computer science at Indiana University, implements a technique called “differential privacy,” which has been a major area of cryptographic research in recent years. Differential-privacy techniques add a little bit of noise, or random variation, to the results of database searches, to confound algorithms that would seek to extract private information from the results of several, tailored, sequential searches.
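For intuition, here is a minimal sketch of the standard Laplace mechanism, the basic differential-privacy building block, applied to a hypothetical count query over a genomic database. The epsilon values and the query are made up for illustration, and the Cell Systems paper tailors its noise to genome-wide association statistics rather than using this generic form.

```python
import numpy as np

def private_count(true_count, epsilon, sensitivity=1.0, rng=np.random.default_rng()):
    """Return a differentially private version of a count query.

    Adding Laplace noise with scale sensitivity/epsilon is the textbook
    mechanism: the true answer changes by at most `sensitivity` if a single
    person is added to or removed from the database.
    """
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical query: how many study participants carry a particular SNP variant?
true_answer = 1234
for eps in (0.1, 1.0, 10.0):   # smaller epsilon means stronger privacy and more noise
    print(eps, round(private_count(true_answer, eps), 1))
```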

Practical applications for non-native English speakers

After thousands of hours of work, MIT researchers have released the first major database of fully annotated English sentences written by non-native speakers.

The researchers who led the project had already shown that the grammatical quirks of non-native speakers writing in English could be a source of linguistic insight. But they hope that their dataset could also lead to applications that would improve computers’ handling of spoken or written language of non-native English speakers.

“English is the most used language on the Internet, with over 1 billion speakers,” says Yevgeni Berzak, a graduate student in electrical engineering and computer science, who led the new project. “Most of the people who speak English in the world or produce English text are non-native speakers. This characteristic is often overlooked when we study English scientifically or when we do natural-language processing for English.”

Most natural-language-processing systems, which enable smartphone and other computer applications to process requests phrased in ordinary language, are based on machine learning, in which computer systems look for patterns in huge sets of training data. “If you want to handle noncanonical learner language, in terms of the training material that’s available to you, you can only train on standard English,” Berzak explains.

Systems trained on nonstandard English, on the other hand, could be better able to handle the idiosyncrasies of non-native English speakers, such as tendencies to drop or add prepositions, to substitute particular tenses for others, or to misuse particular auxiliary verbs. Indeed, the researchers hope that their work could lead to grammar-correction software targeted to native speakers of other languages.

Diagramming sentences

The researchers’ dataset consists of 5,124 sentences culled from exam essays written by students of English as a second language (ESL). The sentences were drawn, in approximately equal distribution, from native speakers of 10 languages that are the primary tongues of roughly 40 percent of the world’s population.

Every sentence in the dataset includes at least one grammatical error. The original source of the sentences was a collection made public by Cambridge University, which included annotation of the errors, but no other grammatical or syntactic information.
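A rough sense of what such an annotation might look like is sketched below as a small Python data structure: a tokenized sentence, part-of-speech tags, the writer’s native language, and a marked error span with its correction. The field names and the example sentence are hypothetical and do not reflect the released dataset’s actual format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ErrorSpan:
    start: int          # token index where the error begins
    end: int            # token index just past the error
    error_type: str     # e.g., "preposition choice"
    correction: str     # corrected text for the span

@dataclass
class AnnotatedSentence:
    tokens: List[str]
    pos_tags: List[str]                      # one part-of-speech tag per token
    native_language: str                     # the writer's first language
    errors: List[ErrorSpan] = field(default_factory=list)

# Invented example for illustration only.
sentence = AnnotatedSentence(
    tokens=["She", "arrived", "to", "the", "station", "late"],
    pos_tags=["PRON", "VERB", "ADP", "DET", "NOUN", "ADV"],
    native_language="Spanish",
    errors=[ErrorSpan(start=2, end=3, error_type="preposition choice", correction="at")],
)
print(sentence.errors[0].correction)
```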

Helping to solve problems in diverse areas

“Julia is a great tool.” That’s what New York University professor of economics and Nobel laureate Thomas J. Sargent told 250 engineers, computer scientists, programmers, and data scientists at the third annual JuliaCon held at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

If you have not yet heard of Julia, it is not a “who” but a “what.” Developed at CSAIL, the MIT Department of Mathematics, and throughout the Julia community, it is a fast-maturing programming language designed to be simple to learn, highly dynamic, and as fast as C, with uses ranging from general programming to highly quantitative applications such as scientific computing, machine learning, data mining, large-scale linear algebra, and distributed and parallel computing. The language was launched as open source in 2012 and has begun to amass a large following of users and contributors.

This year’s JuliaCon, held June 21-25, was the biggest yet, and featured presentations describing how Julia is being used to solve complex problems in areas as diverse as economic modeling, spaceflight, bioinformatics, and many others.

“We are very excited about Julia because our models are complicated,” said Sargent, who is also a senior fellow at the Hoover Institution. “It’s easy to write the problem down, but it’s hard to solve it — especially if our model is high dimensional. That’s why we need Julia. Figuring out how to solve these problems requires some creativity. The guys who deserve a lot of the credit are the ones who figured out how to put this into a computer. This is a walking advertisement for Julia.” Sargent added that the reason Julia is important is because the next generation of macroeconomic models is very computationally intensive, using high-dimensional models and fitting them over extremely large data sets.

Sargent was awarded the Nobel Memorial Prize in Economic Sciences in 2011 for his work on macroeconomics. Together with John Stachurski he founded quantecon.net, a Julia- and Python-based learning platform for quantitative economics focusing on algorithms and numerical methods for studying economic problems as well as coding skills.

The Julia programming language was created and open-sourced thanks, in part, to a 2012 innovation grant awarded by the MIT Deshpande Center for Technological Innovation. Julia combines the functionality of quantitative environments such as Matlab, R, SPSS, Stata, SAS, and Python with the speed of production programming languages like Java and C++ to solve big data and analytics problems. It delivers dramatic improvements in simplicity, speed, capacity, and productivity for data scientists, algorithmic traders, quants, scientists, and engineers who need to solve massive computation problems quickly and accurately. The number of Julia users has grown dramatically during the last five years, doubling every nine months. It is taught at MIT, Stanford University, and dozens of universities worldwide. Julia 0.5 will launch this month and Julia 1.0 in 2017.

The basis for machine-learning systems’ decisions

In recent years, the best-performing systems in artificial-intelligence research have come courtesy of neural networks, which look for patterns in training data that yield useful predictions or classifications. A neural net might, for instance, be trained to recognize certain objects in digital images or to infer the topics of texts.

But neural nets are black boxes. After training, a network may be very good at classifying data, but even its creators will have no idea why. With visual data, it’s sometimes possible to automate experiments that determine which visual features a neural net is responding to. But text-processing systems tend to be more opaque.

At the Association for Computational Linguistics’ Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions.

“In real-world applications, sometimes people really want to know why the model makes the predictions it does,” says Tao Lei, an MIT graduate student in electrical engineering and computer science and first author on the new paper. “One major reason that doctors don’t trust machine-learning methods is that there’s no evidence.”

“It’s not only the medical domain,” adds Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science and Lei’s thesis advisor. “It’s in any domain where the cost of making the wrong prediction is very high. You need to justify why you did it.”

“There’s a broader aspect to this work, as well,” says Tommi Jaakkola, an MIT professor of electrical engineering and computer science and the third coauthor on the paper. “You may not want to just verify that the model is making the prediction in the right way; you might also want to exert some influence in terms of the types of predictions that it should make. How does a layperson communicate with a complex model that’s trained with algorithms that they know nothing about? They might be able to tell you about the rationale for a particular prediction. In that sense it opens up a different way of communicating with the model.”
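The sketch below illustrates only the output format the researchers are after: a prediction accompanied by the input words that drove it, using a toy classifier with hand-set word weights. The CSAIL system instead jointly trains a neural rationale generator and a predictor; nothing about that training procedure is captured here.

```python
# Toy "prediction plus rationale" classifier with hand-set word weights.
# This is not the CSAIL method, only an illustration of returning the
# fragments of the input that drove a decision.
WEIGHTS = {"excellent": 2.0, "delicious": 1.5, "bland": -1.5, "terrible": -2.0}

def classify_with_rationale(text, top_k=2):
    tokens = text.lower().split()
    contributions = {t: WEIGHTS[t] for t in tokens if t in WEIGHTS}
    score = sum(contributions.values())
    label = "positive" if score >= 0 else "negative"
    # The "rationale": the words that contributed most to the decision.
    rationale = sorted(contributions, key=lambda t: abs(contributions[t]), reverse=True)[:top_k]
    return label, rationale

print(classify_with_rationale("The soup was bland but the bread was excellent"))
```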

The deflection of light particles passing through animal tissue

MIT researchers have developed a technique for recovering visual information from light that has scattered because of interactions with the environment — such as passing through human tissue.

The technique could lead to medical-imaging systems that use visible light, which carries much more information than X-rays or ultrasound waves, or to computer vision systems that work in fog or drizzle. The lack of such vision systems has been a major obstacle to the development of self-driving cars.

In experiments, the researchers fired a laser beam through a “mask” — a thick sheet of plastic with slits cut through it in a certain configuration, such as the letter A  — and then through a 1.5-centimeter “tissue phantom,” a slab of material designed to mimic the optical properties of human tissue for purposes of calibrating imaging systems. Light scattered by the tissue phantom was then collected by a high-speed camera, which could measure the light’s time of arrival.

From that information, the researchers’ algorithms were able to reconstruct an accurate image of the pattern cut into the mask.

“The reason our eyes are sensitive only in this narrow part of the spectrum is because this is where light and matter interact most,” says Guy Satat, a graduate student at the MIT Media Lab and first author on the new paper. “This is why X-ray is able to go inside the body, because there is very little interaction. That’s why it can’t distinguish between different types of tissue, or see bleeding, or see oxygenated or deoxygenated blood.”

The imaging technique’s potential applications in automotive sensing may be even more compelling than those in medical imaging, however. Many experimental algorithms for guiding autonomous vehicles are highly reliable under good illumination, but they fall apart completely in fog or drizzle; computer vision systems misinterpret the scattered light as having reflected off of objects that don’t exist. The new technique could address that problem.

Satat’s coauthors on the new paper, published today in Scientific Reports, are three other members of the Media Lab’s Camera Culture group: Ramesh Raskar, the group’s leader, Satat’s thesis advisor, and an associate professor of media arts and sciences; Barmak Heshmat, a research scientist; and Dan Raviv, a postdoc.

Expanding circles

Like many of the Camera Culture group’s projects, the new system relies on a pulsed laser that emits ultrashort bursts of light, and a high-speed camera that can distinguish the arrival times of different groups of photons, or light particles. When a light burst reaches a scattering medium, such as a tissue phantom, some photons pass through unmolested; some are only slightly deflected from a straight path; and some bounce around inside the medium for a comparatively long time. The first photons to arrive at the sensor have thus undergone the least scattering; the last to arrive have undergone the most.
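One simple way to exploit those arrival times, sketched below in Python with NumPy, is time gating: keep only a short window of photons after each pixel’s first arrival, since those photons scattered least. This is a standard simplification for intuition; the MIT technique goes further and uses the full time profile of the scattered light, and the array shapes here are invented for the example.

```python
import numpy as np

def time_gate(photon_counts, gate_width=3):
    """Keep only the earliest-arriving photons at each pixel.

    `photon_counts` is a (height, width, time_bins) histogram from a
    high-speed camera. Early photons scattered least, so summing a short
    window after each pixel's first arrival suppresses the diffuse blur.
    """
    first_arrival = np.argmax(photon_counts > 0, axis=-1)     # earliest nonzero bin
    bins = np.arange(photon_counts.shape[-1])
    gate = (bins >= first_arrival[..., None]) & (bins < first_arrival[..., None] + gate_width)
    return (photon_counts * gate).sum(axis=-1)

# Example: a random 4x4x32 photon histogram stands in for real camera data.
rng = np.random.default_rng(1)
histogram = rng.poisson(0.2, size=(4, 4, 32))
print(time_gate(histogram))
```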

Information theory pioneer who developed early time-sharing computers

Robert “Bob” Fano, a professor emeritus in the Department of Electrical Engineering and Computer Science (EECS) whose work helped usher in the personal computing age, died in Naples, Florida on July 13. He was 98.

During his time on the faculty at MIT, Fano conducted research across multiple disciplines, including information theory, networks, electrical engineering and radar technologies. His work on “time-sharing” — systems that allow multiple people to use a computer at the same time — helped pave the way for the more widespread use of computers in society.

Much of his early work in information theory has directly impacted modern technologies. His research with Claude Shannon, for example, spurred data-compression techniques like Huffman coding that are used in today’s high-definition TVs and computer networks.
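For readers unfamiliar with the technique named above, here is a minimal Python sketch of Huffman coding: symbols are repeatedly merged from least to most frequent, yielding a prefix-free code in which common symbols get short codewords. Real video and networking standards use far more elaborate coders; this is only the textbook construction.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free code from symbol frequencies (textbook Huffman)."""
    freq = Counter(text)
    # Heap entries: (total frequency, tiebreaker, {symbol: partial codeword}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol input
        return {sym: "0" for sym in heap[0][2]}
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, low = heapq.heappop(heap)     # two least frequent subtrees
        f2, _, high = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in low.items()}
        merged.update({s: "1" + c for s, c in high.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
encoded = "".join(codes[ch] for ch in "abracadabra")
print(codes)
print(len(encoded), "bits versus", 8 * len("abracadabra"), "bits uncompressed")
```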

In 1961, Fano and Fernando Corbató, professor emeritus in EECS, developed the Compatible Time-Sharing System (CTSS), one of the earliest time-sharing systems. The success of CTSS helped convince MIT to launch Project MAC, a pivotal early center for computing research for which Fano served as founding director. Project MAC has since dramatically expanded to become MIT’s largest interdepartmental research lab, the Computer Science and Artificial Intelligence Laboratory (CSAIL).

“Bob did pioneering work in computer science at a time when many people viewed the field as a curiosity rather than a rigorous academic discipline,” CSAIL Director Daniela Rus says. “None of our work here would have been possible without his passion, insight, and drive.”

Fano was the Ford Professor of Engineering in EECS and a dedicated teacher who would often labor into the early hours of the morning working on new lectures. He was also a member of multiple research labs at MIT, including the Laboratory for Computer Science, the Research Laboratory of Electronics, the MIT Radiation Laboratory, and the MIT Lincoln Laboratory. He helped create MIT’s first official curriculum for computer science, which is now the most popular major at the Institute.

In many respects, Fano was one of the world’s first open-source advocates. He frequently described computing as a public utility that, like water or electricity, should be accessible to all. His writings in the 1960s often discussed computing’s place in society, and predated today’s debates about the ethical implications of technology.

“One must consider the security of a system that may hold in its mass memory detailed information on individuals and organizations,” he wrote in a 1966 paper he co-authored with Corbató. “How will access to the utility be controlled? Who will regulate its use?”

A native of Italy, Fano studied at the School of Engineering of Torino before moving to the United States in 1939. He earned both his bachelor’s degree (1941) and his doctorate (1947) from MIT in electrical engineering, and was a member of the MIT faculty from 1947 until 1984.

During World War II, Fano worked on microwave components at the MIT Radiation Laboratory and on radar technologies at the Lincoln Lab. He also served as associate head of EECS from 1971 to 1974.

Strengthening the intersection of policy and technology

“When you’re part of a community, you want to leave it better than you found it,” says Keertan Kini, an MEng student in the Department of Electrical Engineering and Computer Science, or Course 6. That philosophy has guided Kini throughout his years at MIT, as he works to improve policy both inside and outside of MIT.

A member of the Undergraduate Student Advisory Group, former chair of the Course 6 Underground Guide Committee, and a member of both the Internet Policy Research Initiative (IPRI) and the Advanced Network Architecture group, Kini has focused his research on finding ways that technology and policy can work together. As Kini puts it, “there can be unintended consequences when you don’t have technology makers who are talking to policymakers and you don’t have policymakers talking to technologists.” His goal is to help them talk to each other.

Kini first became interested in politics at 14, when he volunteered for President Obama’s 2008 campaign, making calls and putting up posters. “That was the point I became civically engaged,” says Kini. After that, he campaigned for a ballot initiative to raise more funding for his high school, and he hasn’t stopped being interested in public policy since.

High school was also where Kini became interested in computer science. He took a computer science class on the recommendation of his sister, and in his senior year he started watching computer science lectures on MIT OpenCourseWare (OCW) by Hal Abelson, a professor in MIT’s Department of Electrical Engineering and Computer Science.

“That lecture reframed what computer science was. I loved it,” Kini recalls. “The professor said ‘it’s not about computers, and it’s not about science’. It might be an art or engineering, but it’s not science, because what we’re working with are idealized components, and ultimately the power of what we can actually achieve with them is not based so much on physical limitations so much as the limitations of the mind.”

In part thanks to Abelson’s OCW lectures, Kini came to MIT to study electrical engineering and computer science. Kini is currently pursuing an MEng in electrical engineering and computer science, a fifth-year master’s program following his undergraduate studies in electrical engineering and computer science.

Preserving their fundamental mathematical relationships

One way to handle big data is to shrink it. If you can identify a small subset of your data set that preserves its salient mathematical relationships, you may be able to perform useful analyses on it that would be prohibitively time consuming on the full set.

The methods for creating such “coresets” vary according to application, however. Last week, at the Annual Conference on Neural Information Processing Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and the University of Haifa in Israel presented a new coreset-generation technique that’s tailored to a whole family of data analysis tools with applications in natural-language processing, computer vision, signal processing, recommendation systems, weather prediction, finance, and neuroscience, among many others.

“These are all very general algorithms that are used in so many applications,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and senior author on the new paper. “They’re fundamental to so many problems. By figuring out the coreset for a huge matrix for one of these tools, you can enable computations that at the moment are simply not possible.”

As an example, in their paper the researchers apply their technique to a matrix — that is, a table — that maps every article on the English version of Wikipedia against every word that appears on the site. That’s 1.4 million articles, or matrix rows, and 4.4 million words, or matrix columns.

That matrix would be much too large to analyze using low-rank approximation, an algorithm that can deduce the topics of free-form texts. But with their coreset, the researchers were able to use low-rank approximation to extract clusters of words that denote the 100 most common topics on Wikipedia. The cluster that contains “dress,” “brides,” “bridesmaids,” and “wedding,” for instance, appears to denote the topic of weddings; the cluster that contains “gun,” “fired,” “jammed,” “pistol,” and “shootings” appears to designate the topic of shootings.
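The sketch below shows one generic way to shrink such a matrix: sample rows with probability proportional to their squared norms and rescale them, so that analyses of the small sample approximate analyses of the full matrix. This norm-based sampling is a standard randomized-sketching idea, not the specific coreset construction in the researchers’ paper, and the matrix here is random stand-in data.

```python
import numpy as np

def sample_rows(A, k, rng=np.random.default_rng(2)):
    """Sample k rows of A with probability proportional to their squared norms.

    The rescaled sample S satisfies E[S.T @ S] = A.T @ A, a standard
    randomized-sketching guarantee, so low-rank analysis of the small
    matrix approximates the same analysis on the full matrix.
    """
    norms = np.square(A).sum(axis=1)
    probs = norms / norms.sum()
    idx = rng.choice(A.shape[0], size=k, replace=True, p=probs)
    return A[idx] / np.sqrt(k * probs[idx])[:, None]

# Example: a random 5,000 x 300 "document-by-word" matrix stands in for
# the Wikipedia matrix described above.
A = np.random.default_rng(3).standard_normal((5_000, 300))
S = sample_rows(A, k=500)                        # 10x fewer rows
_, s_full, _ = np.linalg.svd(A, full_matrices=False)
_, s_small, _ = np.linalg.svd(S, full_matrices=False)
print(s_full[:5])       # top singular values of the full matrix
print(s_small[:5])      # approximated from the sample
```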

Joining Rus on the paper are Mikhail Volkov, an MIT postdoc in electrical engineering and computer science, and Dan Feldman, director of the University of Haifa’s Robotics and Big Data Lab and a former postdoc in Rus’s group.