Category Archives: Science

Thoughts on various aspects of science, from specific concepts and ideas to nuts and bolts.

Ten days of code

Open code in academic computational science should be the standard. After all, the scientific ideal is to share information toward progress. This is at least an idealistic view of why we publish so competitively, with standards that demand we share our findings with our peers and with the public. And while the computational methods of my most recent joint experimental/computational paper far exceeded our experimental methods in length, the step from model to numerical implementation remains non-trivial (at least I'd like to think so).



Neuroethics

I often like to say that, in my line of work in computational neuroscience, they don't let me deal directly with other humans. This is said jokingly, partly to reflect my lack of interpersonal skills and partly to suggest that the results from my mathematical models are narrow in scope and unlikely to have direct implications for human health, for a variety of reasons. One reason is that the data I use to constrain my models come primarily from experiments on animals and are often used in a qualitative fashion. This is because I am often more interested in general dynamical principles of networks that may exist in the brain, since the real problem of full understanding is quite a bit less tractable (though not necessarily impossible). Another reason is that all computational models, from neuroscience to genetics to economics, rest on numerous simplifying assumptions that must be understood carefully in order to interpret the results appropriately. A great deal of responsibility accompanies the communication of model results, since these explicit and implicit assumptions must be carefully specified.

Yet, despite these reasons for not marketing my work as having direct implications for human health, I recognize that the study of brains and nervous systems has wide-ranging implications for issues that will affect human health fundamentally. Perhaps even more so than genetics, brain science is widely believed to be addressing many secrets of who we are as humans and as individuals. While the genome and the brain are both complex molecular machines, the implications of altering brains seem to have an immediacy that may be unique. I believe that neuroscientists have a professional responsibility to understand how the greater public, policy makers, health care providers, businesses, and the law will use and interpret our findings, and to help ensure that those societal decisions are supported by science. Since we admittedly know so little (as a field, we are full of factoids), we currently need to be clear when there is not enough evidence to support these decisions.

The issues, some of which are unique to neuroscience, include cognitive enhancement; incidental findings of maladies in research participants; the use of neuroscience-based evidence in court; inferring a person's predispositions toward certain decisions or actions from a profile of his or her brain; a patient's rights to, and access to, brain-related health care records; and a host of other privacy issues touching on core aspects of personality that may one day be exploited for marketing. This is not at all an exhaustive list.

There is an emerging field of neuroethics that helps to address many of these issues, with organizations attempting to establish guidelines that help researchers and the public make good decisions. In fact, there exists a Presidential Commission for the Study of Bioethical Issues that focuses in part on neuroscience-specific questions. It just finished a meeting that facilitated discussion of several of these issues. My hope is that all of us as professionals take this responsibility seriously and that we engage our friends and the public in discussing these important issues openly, in order to educate each other.

Numerical solver design

Numerical solver design is a topic that I know next to nothing about. I always say I know a tiny bit of MATLAB very well, but lately in my push to become MATLAB-independent, I’ve been trying to remove bits of proprietary code from my own simulations. (My transition from MATLAB to Octave failed in part because of memory issues in Octave. So I’m now looking at Python as my potential free software savior.)

The biggest hurdle has been my use of MATLAB's suite of numerical solvers for ordinary differential equations (ODEs). These are fast, vectorizable to some extent, and take advantage of some parallel processing on multicore machines. However, using these solvers presents a number of problems when one wants to understand the ins and outs of their algorithms. It's not that the information isn't there (via 'type' or 'help'), but wading through code that isn't yours is an ugly problem that isn't going away soon. Additionally, it has never been completely clear to me under what circumstances MATLAB's solvers start to make use of multiple cores, or why that process should remain accurate, even though I've seen evidence of multicore usage at some scale in my own code with ode23s. And there are some things that I'm not sure MATLAB's solvers can do well. For instance, I have MxNxT matrices of a dynamical variable connecting M model cells of one type to N model cells of another type, saved over all T time steps. Tracking all of these, plus the roughly twenty other variables of varying lengths, becomes a real nightmare in vector form without name labels, which structs provide. Of course, vectors offer speed that structs don't, and that is speed I am willing to sacrifice for sanity.
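
To make the bookkeeping concrete, here is a minimal sketch, with made-up sizes and names rather than my actual model, of what flattening an MxN weight matrix into the single state column that MATLAB's solvers expect looks like:

M = 20; N = 30;                          % hypothetical counts of type-A and type-B cells
W0 = rand(M, N);                         % initial synaptic weights, A -> B
v0 = zeros(M + N, 1);                    % one voltage-like variable per cell

y0 = [v0; W0(:)];                        % everything flattened into one unlabeled column

% Inside the vector field, the reverse bookkeeping happens at every call:
%   v = y(1:M+N);
%   W = reshape(y(M+N+1:end), M, N);
% The solver output then holds one flattened state per time step, so the
% weights over time come back as reshape(y_out(:, M+N+1:end).', M, N, []).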

The real bottom line for me is being able to write some MATLAB code that I can easily translate into a Python environment eventually. This process will take a lot longer than I had hoped. By writing solvers now, I hope to be able to take them with me and get some insight into solver design for future projects.

Using structs in a solver. MATLAB's ODE solvers require a strict form of vectorization, with time in one dimension and all of the dynamical variables in the other. I have synaptic weight matrices that are MxNxT, for M cells of type A projecting to N cells of type B, over T time steps. Because these dimensions are tricky to handle in MATLAB directly without a lot of transposes and other confusing manipulations, I was hoping to use structs to name my matrices, making it easy to write the vector field equations clearly without writing confusing converters. It's not purely out of laziness that I want to avoid this; rather, another conversion step is one more place where errors can be introduced.

It turns out that using structs is not a good idea with the separate solver/vector field design, since you would spend too much computational time translating back and forth anyway. Translating once, the hard way, and gaining speed is far better than translating thousands of times the "easy" way at some unknown cost in speed. Next I might try a two-matrix approach: one matrix for the class A cells and one for the class B cells. That does not make this a general solver, but it will allow for easier vectorization.
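
For the record, the struct round trip I decided against looks roughly like this sketch (the field names are hypothetical); the pack and unpack steps would have to run at every vector field evaluation, which is exactly where the time would go:

% pack_state.m: struct with named fields down to one solver-friendly column
function y = pack_state(s)
    y = [s.vA(:); s.vB(:); s.W(:)];
end

% unpack_state.m: back to named fields inside the vector field
function s = unpack_state(y, M, N)
    s.vA = y(1:M);
    s.vB = y(M+1:M+N);
    s.W  = reshape(y(M+N+1:end), M, N);
end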

Variable passing to a function using a solver. In MATLAB, one thing I had trouble learning early on was how to pass variables to the vector field file via the solver. The syntax might look like:

[t, y] = solver('vf', t_range, y0, [], var1, var2, ...);

The outputs are simply the vector of times t and the numerical solution y for the system, which is specified in the vector field function, in this case 'vf'. The range of times and the initial condition y0 are passed to the solver, which uses 'vf' to solve the system. Now, if the function 'vf' requires inputs of its own, which is not uncommon, they have to be passed through the solver first. If you want to pass 'var1' and 'var2' to 'vf', notice that you cannot simply list them right after 'y0'. Instead, in this example, there are empty brackets. These empty brackets are part of the solver's input syntax: they hold the place for options meant for the SOLVER, not for the VF. The reasons for this are now very clear to me, having coded my own solver and found that it ended up structured in exactly the same way. The solver needs to be able to accept options, and those need a fixed position in the argument list; any additional arguments passed through the solver can then be understood as variables for the vector field. While it seems reasonably obvious in hindsight, I understand it much better after having tried to work out the design from first principles.
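
Here is a minimal sketch of both ways of doing this, using a made-up one-variable vector field (tau and I_app are placeholder parameters, not anything from my model):

% vf.m: right-hand side with two extra parameters beyond (t, y)
function dydt = vf(t, y, tau, I_app)
    dydt = (-y + I_app) / tau;           % a simple leaky integrator as a stand-in
end

% Legacy trailing-argument form: [] holds the SOLVER options slot,
% and everything after it is handed on to vf.
[t, y] = ode45('vf', [0 100], 0, [], 10, 1.5);

% Equivalent form with an anonymous-function wrapper, which needs no placeholder:
[t, y] = ode45(@(t, y) vf(t, y, 10, 1.5), [0 100], 0);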

Numerical ODE Solvers

In MATLAB, the suite of numerical solvers for systems of ordinary differential equations (ODEs) consists of functions that accept variables, including other functions. Since I learned to program in MATLAB for my numerical work, I am accustomed to this system of passing a function of equations and variables to the ODE solver. While there is something gratifying, in my mind, about the clean separation and, in theory, plug-and-play ease of this design, I cannot help but think that it is computationally more efficient to have the ODE solver integrated into the numerical application itself. When people said they wrote their own ODE solvers, I always assumed they meant a solver that accepted a function file of equations and iterated over them, in much the same way MATLAB does. However, it is finally becoming clear to me that this is not the case: by ODE solvers, most simply mean hard-coding a numerical algorithm or scheme for iterating their equations. The difference is subtle and may boil down to a difference between computational and mathematical approaches. In my current project, I am interested in interacting rhythms in biophysical models of neural systems, which requires me to solve large systems of ODEs numerically.
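
To make the distinction concrete, "writing your own solver" in that second sense looks roughly like this bare-bones sketch (forward Euler on a made-up one-variable equation with placeholder parameters), rather than a general-purpose routine that accepts an arbitrary function file:

dt = 0.01; T = 100;
t  = 0:dt:T;
v  = zeros(size(t));  v(1) = -65;        % one hypothetical voltage-like variable
tau = 10;  v_rest = -65;  I_app = 12;    % made-up parameters

for k = 1:numel(t) - 1
    dvdt   = (-(v(k) - v_rest) + I_app) / tau;   % right-hand side written in-line
    v(k+1) = v(k) + dt * dvdt;                   % the Euler step, hard-coded around the equations
end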

While I make the transition from MATLAB to GNU Octave/R, I'm finding out which pieces of my implementation translate reasonably and which things I need to learn anew. Of course, I want to make the shift efficiently, but I have to be careful not to do so at the expense of computational efficiency, which is what happened in my first attempt at using 'lsode' as a drop-in solver replacement for a complicated system of ODEs. Since I have decided to ditch MATLAB's graphics in favor of another system that has fewer four-letter words associated with it, the ODE solver was my last true obstacle in the translation from MATLAB to Octave. Now that I am going to solve that problem by "writing my own" scheme, I am that much closer to being completely free and open source in my science.
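
As an aside on the lsode attempt: one concrete wrinkle with treating it as a drop-in replacement, if I have Octave's conventions right, is the argument order. lsode calls the right-hand side as f(x, t) with the state first, the reverse of MATLAB's f(t, y), and takes the initial state and the vector of output times as separate arguments. A toy call with placeholder parameters:

tau = 10;  v_rest = -65;  I_app = 12;          % made-up parameters
rhs = @(v, t) (-(v - v_rest) + I_app) / tau;   % note the (state, time) argument order
t   = 0:0.01:100;                              % times at which output is requested
v   = lsode(rhs, -65, t);                      % lsode(fcn, initial_state, times)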

Evolved to run?

In a new book called Born to Run, author Christopher McDougall explores an intriguing theory that humans have evolved to be distance runners. Woven into the story of the Rarámuri, a running tribe of Mexico's Copper Canyons, is the research of human evolutionary biologists Daniel Lieberman and Dennis Bramble. The basic theory is that humans evolved as bipedal animals with a specific capacity for endurance running that is not shared by other primates such as chimpanzees. It's a very seductive idea for distance runners to think that we're just doing what we so naturally do. In fact, we have evolved to do exactly this, so we're really doing it old school! And to top it off, McDougall's book has gotten hundreds of runners interested in barefoot running, which is put forth as a far more natural way of running than in overbuilt shoes. Biomechanically this is an appealing idea as well, one that I've had fair success implementing in my own running. (I'm up to around 12 miles comfortably in nothing but form-fitting foot slippers.)

Recently, the discovery of Ardipithecus ramidus, or Ardi, was unveiled to the public. Ardi is 4.4 million years old, far older than Lucy (an Australopithecus afarensis), who is a mere 3.3 million years old. I wonder whether the features of Homo sapiens (us!) posited to be beneficial for endurance running are found in these precursors to Homo.

For science and running geeks interested in reading some of the work done on the evolution of humans as endurance runners, see this Nature review by Lieberman and Bramble. A few other articles by the same authors exist, accessible via PubMed, and that review points to several good places to look for other relevant work.

Psychosomatic health

I can't qualify posts like this enough; they are zeroth-order thoughts, more armchair musings than anything close to science.

Fairly often I come across a science news article about a study that correlates behavioral or mental health with some kind of somatic manifestation. There's a term that many bandy around, "psychosomatic," often used superciliously to suggest that one's pains are somehow made up in one's mind. However, pain is a very personal, neurological experience that is difficult to assess objectively. A recent headline I saw posits a link between loneliness in women and a higher incidence of breast cancer. Assuming for a moment, without qualification, that there is a mechanism behind this observation, I was thinking about what it might be. The intuitive reaction is to assume inaction: if loneliness is associated with less-than-optimal brain function (as is easy to imagine), the brain and body might simply fail to fight aggressive cancer cell growth. But what happens when we think about this the opposite way? What if the mechanism of action is more like programmed organismal death? What if, in detecting a sub-optimal neurological state, the brain actively contributes to a condition that leaves the body more susceptible to something parasitic like cancerous growth?

This is all firmly in the domain of a gedanken experiment at the moment, but I would bet there is research that investigates several of these issues and, potentially, how they fit together. To extend these vagaries further, understanding might lead to effective treatments as simple as "being social" or "making oneself happy." Even with the mechanism unknown, these simple things may have far greater ramifications for our psychosomatic health (in the non-pejorative sense) than we tend to assume.

Update: the original article will be published in a journal called Cancer Prevention Research.

Meta comments – and some rude ones

It's difficult to find time to write with any reasonable level of quality on the variety of topics I'm interested in. I've been meaning to write about the Tour de France, the confirmation process of Sonia Sotomayor, and a variety of scientific topics. One difficulty I've found is not being able to discuss my own work in a meaningful way on an informal website such as this. I prefer to let my formal talks and (hopefully soon) published papers communicate interesting results or explored ideas, and I think that peer-reviewed science is still the best model, despite something as seductively democratic as the arXiv model of publication, which would be a nightmare to sort through. So there's an apparent lack of science-related material on here, simply for these reasons.

One thing I think I can write about, briefly, is the publication process, which I'm learning about for the first time in my scientific career. I recently submitted a manuscript with two collaborators to a journal that specializes in computational science, and it went out for peer review about a month ago. We recently received comments back from just the first of two external reviewers, who are not affiliated with the journal directly but are presumably experts in our area of research.

For the first reviewer, the respectfulness of the written comments and questions was greatly appreciated, and I proceeded to incorporate changes and answer that reviewer's questions. The second reviewer apparently did not submit comments to the authors (us), but when we inquired about this, it turned out that the second reviewer had simply not filled out the review form correctly, and the comments intended for the editors were forwarded to us. The tone of those comments was more caustic, almost accusatory, though several valuable points were made. The reviewer made a few comments that were not at all constructive, simply offering an opinion without any suggestions for meeting his or her otherwise arbitrary criteria. I refrain from giving examples here due to confidentiality. Finally, the reviewer was simply wrong about the journal's own guidelines in at least one comment, which makes satisfactory revision on that point impossible. Given the tone of this reviewer's argument, I expect a difficult task in appeasing him or her for the revised submission.