Take 10 with... Brendon Brewer

Dr Brendon Brewer from the Department of Statistics gives us 10 minutes of his time to discuss how he develops statistical methods to evaluate scientific theories.

Dr Brendon Brewer, Senior Lecturer in the Department of Statistics

1. Describe your research topic to us in 10 words or less.

I calculate the probability of all your favourite theories.

2. Now explain it in everyday terms!

Scientists collect data that contains the answers they need, but the information isn’t always presented in a direct or obvious way – it has to be extracted from the raw details. I work with a framework called Bayesian inference, which helps us work out what conclusions our data supports and how much uncertainty remains. It does this using probability theory – calculating how plausible certain theories are, given the data available. I apply it to various problems, mostly in astronomy, which is my PhD field, but also in other areas including biology, geophysics and sport.
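The idea above – using probability theory to calculate how plausible competing theories are, given data – can be sketched in a few lines of code. This is a minimal illustration only, not Dr Brewer's actual research code: a hypothetical example in which three candidate theories about a coin's bias are compared after observing 7 heads in 10 tosses.

```python
from math import comb

def posterior(hypotheses, priors, heads, tosses):
    """Return posterior probabilities P(theory | data) for each candidate coin bias."""
    # Likelihood of the data under each hypothesis: binomial probability of
    # seeing `heads` heads in `tosses` tosses if the coin's bias is p.
    likelihoods = [comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)
                   for p in hypotheses]
    # Bayes' theorem: posterior is proportional to prior times likelihood,
    # then normalised so the probabilities sum to one.
    unnorm = [pr * lk for pr, lk in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

biases = [0.3, 0.5, 0.7]   # three candidate theories for the coin's bias
priors = [1/3, 1/3, 1/3]   # equal plausibility before seeing any data
post = posterior(biases, priors, heads=7, tosses=10)
```

After the update, the theory that the coin is biased towards heads (0.7) comes out as the most plausible, which matches intuition for a 7-out-of-10 result – the data shifts plausibility without removing uncertainty entirely.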

3. Describe some of your day-to-day research activities.

I spend a decent amount of time programming (almost always in C++, Python or R, for those who are wondering). The rest of the time, I’m reading or writing papers, checking graduate students’ work, or meeting online with distant collaborators.

4. What do you enjoy most about your research?

One of the best parts of research is being able to follow your interests and strengths. Years ago, before moving to Auckland, I became curious about how cricketers 'get their eye in' when they start batting. That idea eventually grew into a PhD project for my student Oliver Stevenson (now Senior Manager of Data Science at Luma Analytics), who developed my initial idea into a full model of how batting ability changes during an innings and across a player’s career.

5. Tell us something that has surprised you in the course of your research.

A while back, I was working on a computational algorithm that I thought was pretty good (and still use today). I visited a colleague in New York who, along with his student, had developed a different algorithm for the same kinds of problems. He suggested we both test our methods on a challenging example he’d picked.

About two hours later, he came back to me and said, “We’re hosed!” He’d happened to choose a problem that his own algorithm couldn’t handle – but mine could. This is one of my favourite research memories. We later turned the challenge problem into a paper we wrote together, which is about finding all the stars in a noisy image.

6. How have you approached any challenges you’ve faced in your research?

A few years ago, I became obsessed with a particular problem to do with the Nested Sampling algorithm, which is a strong interest of mine. I wanted to be able to apply it to a broader range of problems than it can currently address. I nicknamed this project the ‘Kevin algorithm’ after my colleague Kevin Knuth, who I had discussed it with.

I had about ten separate ideas for how to do it, and about four criteria I wanted my new, more general method to satisfy. All of my ideas satisfied two or three criteria, but none satisfied all four. After some time working on this, I decided I needed to focus a bit less on it, and a bit more on projects where there was a clearer path to success.

7. What questions have emerged as a result?

For me, the big question about the Kevin algorithm is whether there is some fundamental reason it is not possible, or whether I just haven’t figured it out yet. I suspect it is possible, and I plan to engage with it again in the future, perhaps bringing in some extra brainpower from colleagues.

8. What kind of impact do you hope your research will have?

Realistically, I would like to have a positive influence on the way people perform their statistical analyses. Sometimes, approaching data analysis in a principled way gets you more bang for your buck in terms of what you can extract from the data. This is particularly important in areas where data is very expensive to obtain.

In astronomy, for example, telescopes are oversubscribed and many researchers do not get as much telescope time as they would like. The right statistical analysis can sometimes turn a scientific project from impossible into possible, and I am pleased when I can enable that.

9. If you collaborate across the faculty or University, or even outside the University, who do you work with and how does it benefit your research?

Most of my collaborators are either graduate students in statistics, or astronomers at other universities. However, I was once approached by Associate Professor Michael Rowe from the School of Environment. He had a geophysics data analysis problem that was common in his area and wanted to consult a statistician about it. I don’t know why he chose to knock on my door in particular, but it just so happened that the problem closely resembled ones I had already worked on, so solving it was relatively straightforward. We worked together to tweak my existing code to meet his needs, and wrote a paper about it. I ran into him while teaching last year and was pleased to learn he still uses the little program I wrote back then.

10. What one piece of advice would you give your younger, less experienced research self?

Throughout my career, I have made a note of random research ideas as they occur to me. However, in the past, I didn’t check the list as often as I could have. Looking back at lists from many years ago, I can see that my background knowledge has shifted, making some of those ideas harder for me now than they would have been when I originally thought of them. I would advise my younger self to revisit these lists more frequently, as I do now.