
Engineering senior Caroline Colbert has found herself drawn to a career in medical physics

During January of her junior year at MIT, Caroline Colbert chose to do a winter externship at Massachusetts General Hospital (MGH). Her job was to shadow the radiation oncology staff, including the doctors who care for patients and the medical physicists who design radiation treatment plans.

Colbert, now a senior in the Department of Nuclear Science and Engineering (NSE), had expected to pursue a career in nuclear power. But after working in a medical environment, she changed her plans.

She stayed at MGH to work on building a model to automate the generation of treatment plans for patients who will undergo a form of radiation therapy called volumetric-modulated arc therapy (VMAT). The work was so interesting that she is still involved with it and has now decided to pursue a doctoral degree in medical physics, a field that allows her to blend her training in nuclear science and engineering with her interest in medical technologies.

She’s even zoomed in on schools with programs that have accreditation from the Commission on Accreditation of Medical Physics Graduate Programs so she’ll have the option of having a more direct impact on patients. “I don’t know yet if I’ll be more interested in clinical work, research, or both,” she says. “But my hope is to work in a hospital setting.”

Many NSE students and faculty focus on nuclear energy technologies. But, says Colbert, “the department is really supportive of students who want to go into other industries.”

It was as a middle school student that Colbert first became interested in engineering. Later, in a chemistry class, a lesson about nuclear decay set her on a path towards nuclear science and engineering. “I thought it was so cool that one element can turn into another,” she says. “You think of elements as the fundamental building blocks of the physical world.”

Colbert’s parents, both from the Boston area, had encouraged her to apply to MIT. They also encouraged her towards the medical field. “They loved the idea of me being a doctor, and then when I decided on nuclear engineering, they wanted me to look into medical physics,” she says. “I was trying to make my own way. But when I did look seriously into medical physics, I had to admit that my parents were right.”

Imaging system identifies letters printed on first nine pages of a closed book

MIT researchers and their colleagues are designing an imaging system that can read closed books.

In the latest issue of Nature Communications, the researchers describe a prototype of the system, which they tested on a stack of papers, each with one letter printed on it. The system was able to correctly identify the letters on the top nine sheets.

“The Metropolitan Museum in New York showed a lot of interest in this, because they want to, for example, look into some antique books that they don’t even want to touch,” says Barmak Heshmat, a research scientist at the MIT Media Lab and corresponding author on the new paper. He adds that the system could be used to analyze any materials organized in thin layers, such as coatings on machine parts or pharmaceuticals.

Heshmat is joined on the paper by Ramesh Raskar, an associate professor of media arts and sciences; Albert Redo Sanchez, a research specialist in the Camera Culture group at the Media Lab; two of the group’s other members; and by Justin Romberg and Alireza Aghasi of Georgia Tech.

The MIT researchers developed the algorithms that acquire images from individual sheets in stacks of paper, and the Georgia Tech researchers developed the algorithm that interprets the often distorted or incomplete images as individual letters. “It’s actually kind of scary,” Heshmat says of the letter-interpretation algorithm. “A lot of websites have these letter certifications [captchas] to make sure you’re not a robot, and this algorithm can get through a lot of them.”

Timing terahertz

The system uses terahertz radiation, the band of electromagnetic radiation between microwaves and infrared light, which has several advantages over other types of waves that can penetrate surfaces, such as X-rays or sound waves. Terahertz radiation has been widely researched for use in security screening, because different chemicals absorb different frequencies of terahertz radiation to different degrees, yielding a distinctive frequency signature for each. By the same token, terahertz frequency profiles can distinguish between ink and blank paper, in a way that X-rays can’t.

Terahertz radiation can also be emitted in such short bursts that the distance it has traveled can be gauged from the difference between its emission time and the time at which reflected radiation returns to a sensor. That gives it much better depth resolution than ultrasound.

The system exploits the fact that trapped between the pages of a book are tiny air pockets only about 20 micrometers deep. The difference in refractive index — the degree to which they bend light — between the air and the paper means that the boundary between the two will reflect terahertz radiation back to a detector.
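How strong that reflection is can be estimated with the standard Fresnel formula for normal incidence. A minimal sketch, assuming a terahertz refractive index of about 1.5 for paper (a typical reported figure, not one taken from the paper itself):

```python
# Fresnel reflectance at normal incidence: R = ((n1 - n2) / (n1 + n2))**2.
# Assumed value: paper's terahertz refractive index is roughly 1.5
# (it varies with paper type and frequency).

def reflectance(n1: float, n2: float) -> float:
    """Fraction of power reflected at the boundary between two media."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_paper = 1.0, 1.5
print(f"air/paper boundary reflects ~{reflectance(n_air, n_paper):.1%}")
# -> ~4.0% of the pulse per boundary: small, but enough for each
#    20-micrometer air gap to send a detectable echo to the sensor.
```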

In the researchers’ setup, a standard terahertz camera emits ultrashort bursts of radiation, and the camera’s built-in sensor detects their reflections. From the reflections’ time of arrival, the MIT researchers’ algorithm can gauge the distance to the individual pages of the book.
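The distance calculation itself is simple time-of-flight geometry; a minimal sketch (the researchers' actual algorithm must do more, such as disentangling overlapping echoes from many closely spaced boundaries):

```python
# Round-trip time of flight: a pulse travels to a boundary and back,
# so depth = c * delay / 2.

C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_from_delay(delay_s: float) -> float:
    """Depth of a reflecting boundary, given the round-trip echo delay."""
    return C * delay_s / 2.0

gap_m = 20e-6                  # one 20-micrometer air gap
extra_delay = 2.0 * gap_m / C  # added round-trip delay per gap
print(f"extra delay per gap: {extra_delay * 1e15:.0f} fs")               # ~133 fs
print(f"recovered depth: {depth_from_delay(extra_delay) * 1e6:.0f} um")  # 20 um
```

Resolving pages spaced 20 micrometers apart thus requires timing echoes to roughly hundred-femtosecond precision, which is why the bursts must be so short.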

Regina Barzilay empowers cancer treatment with machine learning

Regina Barzilay is working with MIT students and medical doctors in an ambitious bid to revolutionize cancer care. She is relying on a tool largely unrecognized in the oncology world but deeply familiar to hers: machine learning.

Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science, was diagnosed with breast cancer in 2014. She soon learned that good data about the disease is hard to find. “You are desperate for information — for data,” she says now. “Should I use this drug or that? Is that treatment best? What are the odds of recurrence? Without reliable empirical evidence, your treatment choices become your own best guesses.”

Across different areas of cancer care — be it diagnosis, treatment, or prevention — the data protocol is similar. Doctors start the process by mapping patient information into structured data by hand, and then run basic statistical analyses to identify correlations. The approach is primitive compared with what is possible in computer science today, Barzilay says.
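As a toy illustration of what automating that first hand-mapping step can look like (a sketch only; the field names and patterns below are hypothetical, and real clinical language processing is far more sophisticated than pattern matching):

```python
import re

# Hypothetical sketch: pull structured fields from a free-text note.
# Real clinical NLP systems are far more sophisticated.

def extract_fields(note: str) -> dict:
    """Extract a few illustrative fields from a pathology-style note."""
    fields = {}
    size = re.search(r"tumor size[:\s]+([\d.]+)\s*cm", note, re.IGNORECASE)
    if size:
        fields["tumor_size_cm"] = float(size.group(1))
    grade = re.search(r"grade\s+([1-3])", note, re.IGNORECASE)
    if grade:
        fields["grade"] = int(grade.group(1))
    return fields

note = "Invasive ductal carcinoma, grade 2. Tumor size: 1.8 cm."
print(extract_fields(note))  # {'tumor_size_cm': 1.8, 'grade': 2}
```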

These kinds of delays and lapses, which are not limited to cancer treatment, can really hamper scientific advances, Barzilay says. For example, 1.7 million people are diagnosed with cancer in the U.S. every year, but only about 3 percent (roughly 50,000 people) enroll in clinical trials, according to the American Society of Clinical Oncology. Current research practice relies exclusively on data drawn from this tiny fraction of patients. “We need treatment insights from the other 97 percent receiving cancer care,” she says.

To be clear: Barzilay isn’t looking to upend the way current clinical research is conducted. She just believes that doctors and biologists — and patients — could benefit if she and other data scientists lent them a helping hand. Innovation is needed, and the tools are there to be used.

Barzilay has struck up new research collaborations, drawn in MIT students, launched projects with doctors at Massachusetts General Hospital, and begun empowering cancer treatment with the machine learning insight that has already transformed so many areas of modern life.

Machine learning, real people

At the MIT Stata Center, Barzilay, a lively presence, interrupts herself mid-sentence, leaps up from her office couch, and runs off to check on her students.

She returns with a laugh. An undergraduate group is assisting Barzilay with a federal grant application, and they’re down to the wire on the submission deadline. The funds, she says, would enable her to pay the students for their time. Like Barzilay, they are doing much of this research for free, because they believe in its power to do good. “In all my years at MIT I have never seen students get so excited about the research and volunteer so much of their time,” Barzilay says.

Marshall Scholar will pursue research on algorithms

The senior has conducted innovative research on algorithms to reduce network congestion in the Networks and Mobile Systems group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and he graduates this spring with a bachelor’s degree in computer science and electrical engineering and a master’s in engineering.

But Zuo has also taken some productive detours along the way, including minoring in creative writing and helping to launch MakeMIT, the nation’s largest “hardware hackathon.”

The next step in his journey will take him to Cambridge University, where he will continue his computer science research as a Marshall Scholar.

“The Marshall affords me the opportunity to keep exploring for a couple more years on an academic level, and to grow on a personal level, too,” Zuo says. While studying in the Advanced Computer Science program at the university’s Computer Laboratory, “I’ll be able to work with networks and systems to deepen my understanding and take more time to explore this field,” he says.

Algorithms to connect the world

Zuo fell in love with algorithms his first year at MIT. “It was exactly what I was looking for,” he says with a smile. “I took every algorithms course there was on offer.”

His first research experience, the summer after his freshman year, was in the lab of Professor Manolis Kellis, head of the Computational Biology group at CSAIL. Zuo worked with a postdoc in Kellis’ group to use algorithms to identify related clusters of genes in a single cell type within a specific tissue. “We ended up coming up with a pretty cool algorithm,” he says.
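The article doesn’t detail that algorithm, but as a rough illustration of the task, genes can be grouped by the similarity of their expression profiles; a minimal sketch with synthetic data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy sketch, not the Kellis group's algorithm: cluster genes by the
# similarity of their expression profiles across conditions.

rng = np.random.default_rng(0)
program_a = rng.normal(5.0, 0.3, size=(10, 5))  # 10 genes, one program
program_b = rng.normal(1.0, 0.3, size=(10, 5))  # 10 genes, another
expression = np.vstack([program_a, program_b])  # 20 genes x 5 conditions

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(expression)
print(labels)  # the first 10 genes fall in one cluster, the rest in the other
```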

As a research assistant for TIBCO Career Development Assistant Professor Mohammad Alizadeh, Zuo is now working on cutting-edge algorithms for congestion control in networks, with a focus on “lossless” data networks.
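The article doesn’t describe Zuo’s algorithms, but the textbook starting point for congestion control is TCP’s additive-increase/multiplicative-decrease (AIMD) rule; a minimal sketch of that classic scheme:

```python
# Textbook AIMD (additive increase, multiplicative decrease), the rule
# behind classic TCP congestion control; illustration only, not Zuo's
# research, which targets lossless networks that avoid drops entirely.

def aimd(window: float, loss: bool, incr: float = 1.0, decr: float = 0.5) -> float:
    """Update the congestion window after one round trip."""
    return window * decr if loss else window + incr

window = 1.0
for rtt in range(12):
    loss = window >= 8.0  # pretend the path saturates at 8 packets in flight
    window = aimd(window, loss)
    print(f"RTT {rtt:2d}: window = {window:4.1f}" + ("  <- loss, back off" if loss else ""))
```

The printout shows the familiar sawtooth: the window grows by one packet per round trip until the path saturates, then halves and climbs again.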

Making a ubiquitous model of decision processes more accurate

Markov decision processes are mathematical models used to determine the best courses of action when both current circumstances and future consequences are uncertain. They’ve had a huge range of applications — in natural-resource management, manufacturing, operations management, robot control, finance, epidemiology, scientific-experiment design, and tennis strategy, just to name a few.

But analyses involving Markov decision processes (MDPs) usually make some simplifying assumptions. In an MDP, a given decision doesn’t always yield a predictable result; it could yield a range of possible results. And each of those results has a different “value,” meaning the chance that it will lead, ultimately, to a desirable outcome.
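To make the model concrete, here is standard textbook value iteration on a toy two-state MDP; the probabilities and rewards are invented for illustration and unrelated to the paper:

```python
# Standard value iteration on a toy two-state MDP. P[s][a] maps each
# action to a list of (probability, next_state, reward) outcomes.

GAMMA = 0.9  # discount factor

P = {
    0: {"stay": [(1.0, 0, 0.0)],
        "go":   [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)],
        "go":   [(1.0, 0, 0.0)]},
}

V = {s: 0.0 for s in P}
for _ in range(200):  # enough sweeps for convergence at this discount
    V = {s: max(sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes)
                for outcomes in P[s].values())
         for s in P}

print({s: round(v, 2) for s, v in V.items()})  # {0: 18.54, 1: 20.0}
```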

Characterizing the value of a given decision requires collection of empirical data, which can be prohibitively time-consuming, so analysts usually just make educated guesses. That means, however, that the MDP analysis doesn’t guarantee the best decision in all cases.

In the Proceedings of the Conference on Neural Information Processing Systems, published last month, researchers from MIT and Duke University took a step toward putting MDP analysis on more secure footing. They show that, by adopting a simple trick long known in statistics but little applied in machine learning, it’s possible to accurately characterize the value of a given decision while collecting much less empirical data than had previously seemed necessary.
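The article doesn’t name the trick, but a classic statistical device fitting this description is the median of means; a minimal sketch, assuming that is the technique in question:

```python
import random
import statistics

# Median of means: split the samples into k groups, average each group,
# and report the median of the group averages. It is far more robust to
# heavy-tailed noise than the plain sample mean. Assumed here to be the
# statistical trick the article alludes to.

def median_of_means(samples: list[float], k: int = 20) -> float:
    groups = [samples[i::k] for i in range(k)]
    return statistics.median(statistics.fmean(g) for g in groups)

random.seed(1)
# Rewards that are usually near 1.0 but occasionally enormous.
draws = [1.0 + random.gauss(0.0, 0.1) +
         (500.0 if random.random() < 0.003 else 0.0)
         for _ in range(1000)]
print(f"plain mean:      {statistics.fmean(draws):.2f}")  # dragged up by outliers
print(f"median of means: {median_of_means(draws):.2f}")   # stays near 1.0
```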

In their paper, the researchers described a simple example in which the standard approach to characterizing probabilities would require the same decision to be performed almost 4 million times in order to yield a reliable value estimate.

With the researchers’ approach, it would need to be run 167,000 times, roughly a 24-fold reduction. That’s still a big number — except, perhaps, in the context of a server farm processing millions of web clicks per second, where MDP analysis could help allocate computational resources. In other contexts, the work at least represents a big step in the right direction.

“People are not going to start using something that is so sample-intensive right now,” says Jason Pazis, a postdoc at the MIT Laboratory for Information and Decision Systems and first author on the new paper. “We’ve shown one way to bring the sample complexity down. And hopefully, it’s orthogonal to many other ways, so we can combine them.”