Personal Examples of the Research Cycle and the Research Iceberg Analogy

Tadayoshi Kohno (Yoshi Kohno)
Apr 7, 2022

This is a “sidebar” to my “Unseen PhD Effort, ‘Failures,’ and the Research Iceberg Analogy” post; I’ll refer to the other post as my main post.

In my main post, I include the figure below and I remark that research is an iterative process with many “failures” along the way. In that post, I also provide thoughts on how to interpret such “failures” (short summary: I don’t like the word “failures”).

The Research Process and the Iceberg. Above water: the published work. Below water: the unseen effort, an iterative cycle of (1) identifying potential projects, (2) selecting which problems to work on, (3) working on those problems, (4) refining the work for publication, and (5) dealing with setbacks along the way. Photo of iceberg copyright Andreas Weith under CC BY-SA 4.0; obtained from Wikipedia. Changes include cropping, adding text, and adding a spiral.

Regarding the above figure and the Research Iceberg Analogy, in my main post, I wrote:

[T]he research process (generally) consists of (1) identifying potential projects, (2) selecting which problems to work on, (3) working on those problems, (4) refining the work for publication, and (5) dealing with setbacks along the way. Eventually, one enters the phase of (6) publishing the results.

The cyclical pattern in the above diagram indicates the repetitive cycle of (1)-(2)-(3)-(4)-(5)-(1)-(2)-(3)-(4)-(5)-(1)-(2)-(3)-… that happens before the work is finally published (6). The mass of the cycles below the water surface represents the significant amount of effort that goes into a research project but that is not visible in the resulting publication.

The rest of my main post provides a more philosophical discussion of the “Research Iceberg Analogy.” In this post, I provide some more context as well as personal examples of the above research cycle.

Elaborating on the diagram.

I use the phrase “repetitive cycle” above because almost all research involves having initial ideas ((1) and (2)), making progress ((3) and (4)), and then discovering that the team selected a project or direction that was less interesting, actionable, or fruitful than they initially anticipated. This causes the team to return to an earlier step. Sometimes the team might return to the step of identifying (1) or selecting (2) problems; sometimes the team might return to exploring other ways to solve the problem (3).

Sometimes the team returns to earlier steps because the work was rejected for publication and the team needs to revisit some element of the research or the writing.

I put “refining work for publication” (4) in the repetitive cycle because the process of writing up research results often provides greater insights into the research and the underlying research questions. In fact, one could consider “refining work for publication” (4) and “dealing with setbacks” (5) as parts of “working on those problems” (3).

Returning to an earlier step is not a “failure.” Rather, learning that one direction isn’t working out and returning to an earlier step in the research cycle is still progress. In my figure, the spiral continues to advance upward. I discuss “failure” more in my main post on the Research Iceberg Analogy.

Personal examples.

My currently most cited publication is my 2010 paper on automotive security. In 2020, it received the “Test of Time Award” from the IEEE Symposium on Security and Privacy (a top peer-reviewed publication venue in my field, computer security). In 2021, our automotive effort (spanning our 2010 paper and our 2011 paper) received the American Association for the Advancement of Science (AAAS) “Golden Goose Award.”

This effort was a collaboration between the University of Washington (where I’m at) and UC San Diego; this is our project page.

Image of Lesley Stahl and Kathleen Fisher driving our research vehicle for 60 Minutes (image from CBS News). Our research team discovered security vulnerabilities that would allow remote parties to disable the car’s brakes. In this image, Lesley Stahl attempted to stop the car by pressing the brake pedal before hitting the cones.

Referring to the iceberg figure, the below-surface portion of our 2010 automotive security paper is huge. I recall being so excited to submit our work (an earlier version of this paper) to a peer-reviewed conference in or around 2009. It was rejected. We returned to an earlier point in the research cycle. Among the things that we did: we significantly rewrote a sizeable portion of the paper. Eventually, we submitted to the 2010 IEEE Symposium on Security and Privacy. That submission was almost rejected, too. However, it was accepted! And the rest is history :).

And, actually, I am glad that our first submission was rejected. I was not glad at the time, but I am now. The second (accepted) submission is a much better paper.

A similar thing happened to our funding requests related to our automotive security effort. We submitted a grant proposal for automotive security research to the U.S. government. It was rejected. We continued to work on the proposal, resubmitted it, and the resubmission was funded.

Returning to a discussion of our 2010 paper: not only was our first submission of this work rejected, but we had multiple “failures” along the way (see my main post for my thoughts about this word). As background, even back in 2008 and 2009, the modern car was pervasively computerized, with computers controlling critical components like the brakes, the engine, the transmission, and so on. After obtaining two 2009-edition modern sedans, our team began to employ reverse engineering methods (code analysis) to try to understand how our cars’ computers worked. Reverse engineering all the components was more difficult than we imagined. So we rewound our research process and developed a different approach: the fuzzing approach discussed on page 8 of our paper.
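(For readers less familiar with the term: “fuzzing” here means sending many random or malformed inputs, in our case packets on the car’s internal CAN bus, and observing how the car’s computers respond. The sketch below is a rough, hypothetical illustration of that general idea, assuming the python-can library and a Linux SocketCAN test interface; it is not our team’s actual tooling, nor the specific approach described in our paper.)

```python
# Hypothetical sketch only: NOT our actual tooling, and not something to run on a real
# vehicle outside a controlled test environment. Assumes the python-can library and a
# Linux SocketCAN interface (e.g., "can0" on a bench setup).
import random

import can


def fuzz_can_bus(channel: str = "can0", num_frames: int = 1000) -> None:
    """Send random CAN frames and watch for any traffic they trigger."""
    bus = can.interface.Bus(channel=channel, bustype="socketcan")
    try:
        for _ in range(num_frames):
            frame = can.Message(
                arbitration_id=random.randrange(0x800),  # random 11-bit CAN identifier
                data=bytes(random.randrange(256) for _ in range(8)),  # random 8-byte payload
                is_extended_id=False,
            )
            bus.send(frame)
            response = bus.recv(timeout=0.01)  # observe any frame that follows
            if response is not None:
                print(f"sent 0x{frame.arbitration_id:03X} -> observed 0x{response.arbitration_id:03X}")
    finally:
        bus.shutdown()


if __name__ == "__main__":
    fuzz_can_bus()
```

Needless to say, anything like this belongs only on a bench or test vehicle in a controlled environment, and a real fuzzing campaign involves far more instrumentation and care than this toy sketch suggests.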

There were also several research objectives that we were never successful at. We never found remote code execution vulnerabilities through FM radio transmissions or TPMS (tire pressure monitoring system) radio transmissions, for example.

I still regularly encounter challenges in research. I currently have three projects that I am extremely excited about but that have each received at least one rejection from a peer-reviewed venue. These rejections are disheartening. Still, I believe in these projects. These rejections have become opportunities for our teams to revisit our work and make our icebergs stronger. I hope to share these projects with the world soon! I am also in the middle of another project that has pivoted multiple times. Each time it pivots — each time it returns to an earlier phase in the research cycle — I know that our research is getting stronger. We have yet to submit any version of this project for publication, and there is still a chance that we will pivot directions yet again. Despite the numerous pivots, I believe in this project and our overall vision; I look forward to sharing this project with the world soon!

As a PhD student, I spent significant time working on multiple projects that ended up never becoming publications. I still do. Despite the lack of resulting publications, I am glad to have worked on these projects and learned a lot while doing so.

I clearly remember some of the projects I “failed” at as a PhD student. (As noted above, I discuss the word “failure” more in my main post.) One summer, Fabian Monrose and Avi Rubin (two professors that I worked with) suggested the following project: research whether it is possible to determine what someone is typing by listening to the sounds of their keystrokes. I thought this was a brilliant idea and tried my best to find an answer to the research problem. I was never successful, but not because the problem was unsolvable. Later, another research team published a solution to the same problem.

Despite not being successful, I am grateful to have worked on that project. The idea — of figuring out what someone is typing by listening to the sounds of keystrokes — remains one of my favorite ideas in the field of computer security. Every time I teach CSE 484 (undergraduate computer security at UW), I share this example of an adversarial capability with my students. I still wish that I had been successful with that research project, and I still feel disappointed that I wasn’t able to implement Avi’s and Fabian’s vision. But I learned a lot through that project, and I’m sure Avi and Fabian understand. In fact, I am grateful to have had that “failure” as a PhD student because it helps me reflect on how I advise my own students.

In short, I am glad to have spent that time working on that project.

I offer the above in the hopes of providing concrete examples to researchers who are in the process of working on their first research projects and who have yet to experience the full repetitive research cycle themselves. I suspect that all experienced researchers in the field have their own examples to share.

Acknowledgements.

Thank you to all the students and postdocs that I have advised, past and present! Thank you to all my collaborators on the projects mentioned in this post and all my other projects as well. Thank you also to Kaiming Cheng, Camille Cobb, Ivan Evtimov, Earlence Fernandes, Umar Iqbal, Lucy Simko, and Eric Zeng for comments on my writings about icebergs.

Tadayoshi (Yoshi) Kohno is a professor in the UW Paul G. Allen School of Computer Science & Engineering. His homepage: https://homes.cs.washington.edu/~yoshi/.