<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Matt Wall on Medium]]></title>
        <description><![CDATA[Stories by Matt Wall on Medium]]></description>
        <link>https://medium.com/@m.b.wall?source=rss-bb4f2cd47757------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*9vLu4kn63oSdjG6TW9uZpw.jpeg</url>
            <title>Stories by Matt Wall on Medium</title>
            <link>https://medium.com/@m.b.wall?source=rss-bb4f2cd47757------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 15 May 2026 17:45:05 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@m.b.wall/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[How to make super-fancy brain network plots: A tutorial with Gephi]]></title>
            <link>https://medium.com/@m.b.wall/how-to-make-super-fancy-brain-network-plots-a-tutorial-with-gephi-6583921ef349?source=rss-bb4f2cd47757------2</link>
            <guid isPermaLink="false">https://medium.com/p/6583921ef349</guid>
            <category><![CDATA[neuroimaging]]></category>
            <category><![CDATA[dataviz]]></category>
            <category><![CDATA[fmri]]></category>
            <category><![CDATA[neuroscience]]></category>
            <category><![CDATA[gephi]]></category>
            <dc:creator><![CDATA[Matt Wall]]></dc:creator>
            <pubDate>Thu, 12 Dec 2024 17:20:02 GMT</pubDate>
            <atom:updated>2024-12-12T20:12:50.247Z</atom:updated>
            <content:encoded><![CDATA[<p><a href="https://royalsocietypublishing.org/doi/10.1098/rsif.2014.0873">This paper by Petri et al. from 2014</a> contains one of the most famous images in psychedelic science:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RuFVPGb8oz1PSVm2mhyCQw.jpeg" /></figure><p>If you’ve ever seen a presentation about psychedelics and the brain, you’ve probably seen it. It’s on the inside covers of Michael Pollan’s <a href="https://michaelpollan.com/books/how-to-change-your-mind/">best-selling book</a>, and a version of it has even appeared as graffiti:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/809/1*TeMi5X6BqrXRfL2kGiGhcA.jpeg" /></figure><p>Now, that’s impact! It’s also one of the most widely misunderstood and misinterpreted findings I know of. It’s usually interpreted as showing how brain connectivity differs under placebo and psilocybin, but the lines on these diagrams do not represent ‘connectivity’ in any simple sense; they are in fact ‘persistence homological scaffolds’. Despite reading this paper several times, I’m happy to admit that I don’t really have a solid grasp of what exactly a ‘persistence homological scaffold’ represents, and that seems to be true for most other researchers I’ve talked to about this figure. As far as I can tell it’s a novel measure derived from network theory and something to do with the shapes of the ‘holes’ in the network (i.e. the bits between the nodes and edges). These kinds of concepts tend to be something that I can grasp conceptually if I have someone patient to explain it to me and enough crayons to chew, but honestly the maths is beyond me. Feel free to <a href="https://royalsocietypublishing.org/doi/10.1098/rsif.2014.0873">take a swing at it yourself</a> and correct/educate me in the comments.</p><p>It’s also been a bit mysterious (to me, at least) because I couldn’t figure out how it had been produced. 
I’ve recently got some of my own data that I’d like to visualise in this way, so I was suddenly more motivated to try and understand how these plots had been produced. These kinds of plots are often called ‘circle plots’ or ‘chord diagrams’, and there are lots of packages/toolboxes around that can help you create them. You can do them in <a href="https://uk.mathworks.com/matlabcentral/discussions/tips/844286-it-is-pretty-easy-to-draw-chord-chart-by-matlab">Matlab</a>, or there are <a href="https://jokergoo.github.io/circlize/">R</a> and <a href="https://python-graph-gallery.com/chord-diagram-python-chord/">Python</a> packages which can also produce some nice results. However, these all look… different. Not quite as appealingly clean and neat. Yes, I suppose I could have just asked the authors, but I’m a bit pig-headed about wanting to work these things out myself sometimes. The breakthrough came when I re-read the Petri et al. paper and realised they had provided some <a href="https://royalsocietypublishing.org/doi/suppl/10.1098/rsif.2014.0873">supplementary materials</a>. These turned out to be .gexf files. A quick Google revealed that .gexf files are a format used by an application called <a href="https://gephi.org/">Gephi</a>: a really quite impressive (and free!) little app for visualising complex network data that I’d previously been unaware of.</p><p>(Note: Because, as noted above, I don’t understand what they are, I’m not going to be visualising ‘persistence homological scaffolds’ here, but something much simpler instead: the correlation between two nodes in a network, which is generally recognised as being a reasonable measure of how functionally connected two brain regions are. The resulting images are therefore going to <em>look</em> similar to the Petri et al. (2014) figure, but in a very important sense they are <em>not </em>similar. 
Anyway, back to it…)</p><p>If you open one of the supplementary .gexf files in Gephi you can see the format of the data in the ‘Data laboratory’ section. There are two tables: Nodes and Edges. This makes some sense!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wHFE0IaEJIKALr0uaGR3VQ.png" /><figcaption>Nodes table</figcaption></figure><p>So the nodes table is a list of the brain regions, with a simple ID number in the first column; the ‘Modularity class’ column defines the network each node belongs to (the different colours of the nodes around the edge). Fine.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2tz_yaExXejNrQ6c9ythPg.png" /><figcaption>Edges table</figcaption></figure><p>Then the ‘Edges’ table obviously defines the lines/connections in the plot. The first two columns define the start and end node of each line, and the third specifies that they should all be ‘undirected’ (i.e. there is no indication of the causal relationship between the source and target). Then there’s an ID column, a couple of other columns which don’t seem to be used, and finally a ‘Weight’ column which is used to specify the thickness/transparency of the lines.</p><p>I’m now going to take a step back and use some new data for a tutorial, using these files as a model. I want to visualise some simple connectivity data: basically, a big old correlation matrix with a bunch of brain areas. I’ve got some data from a study where I’ve used the Schaefer et al. (2018) 200-region parcellation (<a href="https://pubmed.ncbi.nlm.nih.gov/28981612/">paper</a>, <a href="https://github.com/ThomasYeoLab/CBIG/blob/master/stable_projects/brain_parcellation/Schaefer2018_LocalGlobal/README.md">Github repo</a>). This is nothing fancy, but briefly, I extracted time-series from a single subject’s resting-state fMRI data using this atlas, then did a simple pair-wise correlation (Spearman’s) between all of them. 
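(For the record, that correlation step is easy to reproduce in most languages. Here’s a minimal Python sketch of the same idea — to be clear, this is not my actual analysis code, and it uses random numbers as a stand-in for the real time-series:)

```python
# Sketch only (not the analysis code from this post): pairwise Spearman
# correlation between region time-series. `ts` is a stand-in
# (timepoints x regions) array of random data; the real data would be
# time-series extracted with the Schaefer 200-region atlas.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
ts = rng.standard_normal((300, 200))   # e.g. 300 volumes, 200 regions

corr_matrix, _ = spearmanr(ts)         # correlates all columns pairwise
print(corr_matrix.shape)               # (200, 200)
```

`spearmanr` on a 2-D array correlates its columns, so one call gives you the whole symmetric matrix with ones on the diagonal.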
This produces a 200x200 correlation matrix, which you can visualise quite easily in Matlab with a heatmap:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/900/1*ybrH6zmccegAYVdN4RJg8A.png" /></figure><p>So how do we go from this massive correlation matrix to the data format that we need for Gephi? The nodes table is quite easy; fortunately the Schaefer (2018) atlas provides lists of node names sorted into the networks they belong to, so it was simple to create a sheet of the same format as the Nodes sheet above (with 200 rows, obviously):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/522/1*V-pdDesSdA1HKQvS86d5NQ.png" /><figcaption>My new ‘Nodes’ sheet.</figcaption></figure><p>The ‘Edges’ sheet is a little trickier. We basically need to re-shape the correlation matrix so all the right bits of it are in one column vector, then insert other useful columns (source, target, ID, etc.). To do this I used some simple Matlab data-munging code:</p><pre>%Assumes you already have a variable/structure loaded which is your correlation<br>%matrix: corr_matrix<br><br>%Get rid of the upper triangle of the matrix since it&#39;s symmetrical:<br>lower_triangle=tril(corr_matrix);<br><br>numrois=200;<br><br>%Makes a simple index column vector (1-200) for use later:<br>index = 1:numrois;<br>index = index&#39;;<br><br>%i is going to be our row variable and j is our column variable<br>i=1;<br>j=1;<br><br>%Empty 0x7 starter matrix so the 7-column blocks can be concatenated onto it:<br>final_output=zeros(0,7);<br><br>while i &lt; numrois<br>   <br>   %Seventh column: &#39;Weight&#39; (the correlation values):<br>   output(1:numrois-i,7) = lower_triangle(i+1:numrois,j);<br><br>   %First column: &#39;Source&#39;<br>   output(1:numrois-i,1) = index(i+1:numrois,1);<br><br>   %Second column: &#39;Target&#39;<br>   output(1:numrois-i,2) = i;<br><br>   i=i+1;<br>   j=j+1;<br><br>   final_output = [final_output;output];<br>   clear output<br>end<br><br>%Write output file<br>OutputFile=fullfile(&#39;reshaped_for_Gephi.csv&#39;);<br>writematrix(final_output, OutputFile);</pre><p>The important bit here 
is the while loop. This iterates over each column of the correlation matrix and copies the values into a new output variable. The first time around the loop it copies the values from column 1, rows 2:200, then it moves to the next column and copies rows 3:200, then the third column and rows 4:200, and so on. In this way we systematically copy the values into a single column vector. We’re only copying the values from the lower-left triangle of the matrix here, and also omitting the diagonal since that all has values of 1. Then we use the ‘index’ variable to create the ‘Source’ and ‘Target’ columns. To match the Petri et al. (2014) files I’m putting the correlation values in the seventh column of the output. The output is a 19900x7 matrix which looks like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/537/1*c2w8hVuWLh0cHWmeyBNDzA.png" /></figure><p>Getting there, but not exactly what we need. The first two columns and the seventh are right, but the rest need some work. I’m sure I could have coded it better, but I am a despairingly shitty coder and also quite impatient and lazy, so I just manually edited this file in Excel to put in the headers, and the other columns that I needed. Yes, I know this is crashingly inelegant; I’m sure you could do a better job if you have fancier coding skills. I also still have no idea what the two blank columns are for (Label and Interval), but have preserved them just because I’m working off the example from the Petri et al. (2014) paper. *shrug*</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/530/1*WFZMUMyQuE_1cXn2cAhHhg.png" /></figure><p>Now then — back to Gephi. 
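(One aside before we do: if you’d rather skip the manual Excel step entirely, the whole reshape-and-headers job can also be done in a few lines of Python. This is just a sketch, not code from my pipeline; it uses a random symmetric stand-in matrix, and the column names simply copy the layout of the edges table shown above:)

```python
# Sketch (not from the post's pipeline): build a Gephi-style 'Edges' CSV
# directly from a correlation matrix, headers included.
import csv
import numpy as np

# Random symmetric stand-in for the real 200x200 correlation matrix:
rng = np.random.default_rng(0)
m = rng.random((200, 200))
corr_matrix = (m + m.T) / 2

# Lower triangle only, diagonal excluded -- same as the Matlab while loop:
src, tgt = np.tril_indices(corr_matrix.shape[0], k=-1)

with open("reshaped_for_Gephi.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["Source", "Target", "Type", "Id", "Label", "Interval", "Weight"])
    for n, (i, j) in enumerate(zip(src, tgt), start=1):
        # Node IDs are 1-based to match the Nodes sheet; Label/Interval blank
        w.writerow([i + 1, j + 1, "Undirected", n, "", "", corr_matrix[i, j]])
```

That writes one header row plus 200×199/2 = 19900 edge rows, ready for Gephi’s spreadsheet import.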
If we open a new project we can load our new sheets using the ‘Import Spreadsheet’ button in the Data Laboratory section:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5Ez9ESNJrjum2mclX0Qpqw.png" /><figcaption>New nodes table</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uMKGyF8RuHHHUuRDjoVp2A.png" /><figcaption>New edges table</figcaption></figure><p>Then we can finally switch to the ‘Overview’ section of Gephi and visualise our network! And you’ll find it looks… well… shitty.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*g_0xgjT4zhfFsCSwQywXUA.png" /><figcaption>A big black mess.</figcaption></figure><p>This is because we need to do some more tweaking. First we can filter/threshold our edges based on their weights. We do this by dragging an ‘Edge Weight’ filter down into the ‘queries’ section on the right-hand side and then moving the slider to the value we want. I’ve gone for 0.5 here:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yy6_fGbJKE_dbSW0bEgCeg.png" /></figure><p>Looks a little more network-y but still not great. Next we need to choose a layout. There are lots of layouts to choose from in Gephi, most of which I don’t really understand, but I want a circle, which means I had to download a plugin (Tools menu &gt; Plugins) called ‘Circular Layout’. Once you’ve done that and selected it in the left-hand Layout panel, you get this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*KGzF8w1tdf6oHu77R6zI9A.png" /></figure><p>(Other layouts are fun to play with too of course — go nuts!).</p><p>Now we want some colours. 
Let’s colour the nodes by their network, using the top-left panel and clicking the ‘Apply’ button:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DYahltCBHztCvQEJ-syqBQ.png" /></figure><p>This colours the edges too, with a colour that’s a combination of the colours of the two nodes that they connect. Neat. The next problem is that the nodes are in a somewhat random order around the edge of the circle, but we can sort them by fiddling with the settings of the Circular Layout (sort by network, bottom-left):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PgVg7cTai7au7Jtndh810Q.png" /></figure><p>Now we want to scale the size of the nodes so that it reflects their connectivity in some way. Gephi can run lots of statistics on networks using the statistics tab on the right-hand side. Head over there and click the ‘run’ button on some of them:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*30Vzmxl5RYtwBqDJ6Wqx0A.png" /></figure><p>I’ve used ‘Avg. Clustering Coefficient’ here, but you may wish to use a different one. Then you can select one of those statistics you’ve just calculated on the left panel (green highlight above) to scale the size of your nodes.</p><p>We’re nearly there! If you then switch to the ‘Preview’ tab at the top you can see something that bears more than a passing resemblance to the original Petri et al. (2014) figure:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*E4hIfixbe3oAUlspPD3XuA.png" /></figure><p>I fiddled with the settings in the left panel here too, scaled the weight of the edges, reduced the opacity, etc. 
The Preview tab is also where you can export your masterpiece to some different image formats.</p><p>Here’s another one I created using this exact process with some further optional tweaks (changed the colours to get rid of the muddy browns, manually dragged some nodes in/out of the circle to make the within-network connections clearer):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/586/1*onEetQC8M5PabnHzPWOX7Q.png" /></figure><p>So… That’s pretty much it. Hopefully this will be helpful to someone. This is only just scratching the surface of the kinds of visualisations and analysis you can do with Gephi of course, and there are loads of other tutorials and resources around about Gephi — check some out if you’re interested. Also, please do let me know if anything’s unclear and I’ll try and clarify/correct this page. Happy visualising!</p><p>TTFN.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6583921ef349" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Psychedelic science must redouble its efforts to do rigorous, objective research]]></title>
            <link>https://medium.com/@m.b.wall/psychedelic-science-must-redouble-its-efforts-to-do-rigorous-objective-research-d25d143efe5c?source=rss-bb4f2cd47757------2</link>
            <guid isPermaLink="false">https://medium.com/p/d25d143efe5c</guid>
            <category><![CDATA[psychedelic-science]]></category>
            <category><![CDATA[psychedelics]]></category>
            <category><![CDATA[psychedelic-therapy]]></category>
            <dc:creator><![CDATA[Matt Wall]]></dc:creator>
            <pubDate>Sun, 05 May 2024 10:09:20 GMT</pubDate>
            <atom:updated>2024-05-05T10:09:20.913Z</atom:updated>
            <content:encoded><![CDATA[<p>There was a new meta-analysis on psychedelic therapy for depression <a href="https://www.bmj.com/content/385/bmj-2023-078084">published in the British Medical Journal</a> last week, which was immediately (and rightly) torn apart by commentators on twitter. Problems with it seem to include mistaking standard errors for standard deviations, counting different end-points from the same study as separate studies, and others. The issues are summed up well by this thread:</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//twitter.com/IoanaA_Cristea/status/1786689167964930480&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/7bb7a9132f96b9961bd21d8a2a27e51d/href">https://medium.com/media/7bb7a9132f96b9961bd21d8a2a27e51d/href</a></iframe><p>This led the <a href="https://www.bmj.com/content/385/bmj.q1025">BMJ to publish an expression of concern</a> about the paper, and I look forward to seeing the results of the investigation mentioned there. In the aftermath of this debacle I saw this tweet from Nick Brown, who I have a huge amount of respect for as a scourge of shoddy methods, uncoverer of dodgy data, and general all-round science hero:</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//twitter.com/sTeamTraen/status/1787034712080359879&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/060a592e4d4cf9ef8aacf79229db0433/href">https://medium.com/media/060a592e4d4cf9ef8aacf79229db0433/href</a></iframe><p>Is he right? I very much hope not; I am of the opinion that most work in this area is pretty robust (within the constraints that it operates), and this recent BMJ paper is an outlier, but we have to acknowledge the critique. 
Psychedelic research currently has a lot of momentum, and the push towards getting psychedelic therapy into the mainstream is currently going pretty well — perhaps better than many of us thought it would ten years ago. However, we still stand at a crucial inflection point, with intense resistance in some areas and massive legal, scientific, and operational challenges to overcome. Many of the key issues in psychedelic science are laid out very clearly in Michael van Elk and Eiko Fried’s <a href="https://journals.sagepub.com/doi/10.1177/20451253231198466">recent review paper.</a></p><p>With the current wave of enthusiasm for psychedelic science and the huge expansion in research activity in this area it’s perhaps inevitable that some dodgy papers with sub-standard methods will get published, but for those of us who have a strong commitment to the principles of high-quality science and also believe in the potential of this new class of therapies, there is a strong challenge. Put simply — we must do better. Get better data, do better research, put our personal beliefs aside and do truly robust and rigorous work, presented in an absolutely objective manner. We must make every effort to forestall possible critiques and take great care in our experimental designs and statistical methods. I know many outstanding scientists who are working in this area, who understand the issues well, and do very high-quality research. I therefore believe we are meeting, and will continue to meet, this challenge, but it will take continuing hard work, diligence, and a strong commitment in the community to the absolute highest-quality methods. Only our ongoing dedication to this approach will overcome the legal and scientific obstacles, and help us achieve the goals for these therapies that we feel they, and the patients, deserve.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d25d143efe5c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Academic publishers: The original enshittificationists]]></title>
            <link>https://medium.com/@m.b.wall/academic-publishers-the-original-enshittificationists-62f8b1f9544c?source=rss-bb4f2cd47757------2</link>
            <guid isPermaLink="false">https://medium.com/p/62f8b1f9544c</guid>
            <dc:creator><![CDATA[Matt Wall]]></dc:creator>
            <pubDate>Sun, 14 Apr 2024 10:49:08 GMT</pubDate>
            <atom:updated>2024-04-14T10:49:08.054Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*KSIgkHfV3CZ29Hm0.jpg" /><figcaption>Profit margins at academic publishers are *insane*.</figcaption></figure><p>Cory Doctorow is a polymathic presence on the internet; as a novelist, journalist, tech-evangelist, and general all-round liberal good egg, his contribution to modern internet culture has been enormous. Perhaps his most important recent work though, has been developing and promoting the concept of ‘<a href="https://en.wikipedia.org/wiki/Enshittification">enshittification</a>’. This is a life-cycle process whereby massive online platforms (Google, Amazon, Facebook, etc.) find initial success, exploit that success for profit, and then eventually decay and die. In Cory’s own words:</p><blockquote>Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die. I call this enshittification, and it is a seemingly inevitable consequence arising from the combination of the ease of changing how a platform allocates value, combined with the nature of a “two sided market”, where a platform sits between buyers and sellers, hold each hostage to the other, raking off an ever-larger share of the value that passes between them.</blockquote><p>(Cory also crossed over into my rough sphere of influence/expertise when he did an excellent episode of the Drug Science podcast, with David Nutt — <a href="https://www.drugscience.org.uk/podcast/53-moral-panic-with-cory-doctorow/">check it out</a>.)</p><p>This concept of enshittification is <a href="https://doctorow.medium.com/googles-enshittification-memos-2d6d57306072">why Google search sucks now</a>, why your Facebook feed is full of bullshit ads, why there are so many weird porn bots on Twitter, and so on. 
Once users are locked in, platforms are free to degrade their product, in order to extract more revenue from their advertising customers.</p><p>It occurred to me that academic publishers also fit this model quite well, and in fact have been playing the enshittification game for decades. A lot has been written about the evils of academic publishing, but for those readers who may not be aware, the business model goes like this: Researchers (mostly funded by public money from government grants) do scientific research. The researchers then give the results of their work to academic journals for free. Other researchers then work (also for free) to do peer reviews of the papers, which the publishers print in journals and sell back to universities and other organisations, at a massive profit. In fact, in recent years, with the advent of ‘open-access’ journals, the researchers often pay eye-wateringly high fees to the journal to publish their research as well. It’s wittily summed up in this video by the peerless <a href="https://twitter.com/DGlaucomflecken">@DrGlaucomFlecken</a>:</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//twitter.com/DGlaucomflecken/status/1778485431228678186&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/0ba18a348b7ce83d52822bec45f59368/href">https://medium.com/media/0ba18a348b7ce83d52822bec45f59368/href</a></iframe><p>So, public money ends up being spent up to three times (to do the research, to pay the journals to publish it, and then to buy the publications back from the journals again), while the publishers sit in the middle and make massive profits: up to 40%. 
Some have argued that academic publishing is <a href="https://www.newscientist.com/article/mg24032052-900-time-to-break-academic-publishings-stranglehold-on-research/">the most profitable business in the world.</a> The history of how this came to be is quite fascinating, and there’s <a href="https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science">a really good long-read on the Guardian website about it all here</a>, but essentially the modern industry was created by Elsevier in the 1970s.</p><p>In the pre-internet age, publishers obviously provided a valuable service in disseminating information. The only way scholars and researchers could stay up-to-date with developments in the field was to read actual paper journals that were delivered every month to their local academic library. However, now that basically anyone can publish anything they want for a global audience on the internet, the publishers’ actual basic utility is pretty low. The business model persists because academics want to publish in high-impact journals and it turns out they will pay <a href="https://www.the-scientist.com/for-a-hefty-fee-nature-journals-offer-open-access-publishing-68181">exorbitantly high fees to continue doing so.</a></p><p>So, I think the enshittification model works quite well here. First, publishers were good to their users (the researchers). “We’ll publish your paper for free and all your peers and rivals can read it and be awed at your brilliance!” Sounds great. Then they exploit their users — in this case it’s for free labour in performing peer-review of others’ papers and in charging high open-access fees. 
Then they exploit their business customers (the universities and institutions that buy subscriptions to the journals) by massively hiking up prices to levels that <a href="https://www.theguardian.com/science/2012/apr/24/harvard-university-journal-publishers-prices">the richest universities in the world say are no longer affordable.</a></p><p>There is resistance, of course, from the academic world. <a href="https://www.nature.com/articles/d41586-023-01391-5">The recent mass-resignation of the editorial staff from the Elsevier journal Neuroimage</a> in protest at the high open-access fees is a good recent example. The rise of free-to-publish pre-print servers like the physics-focused <a href="https://arxiv.org/">arXiv</a> has, to some extent, <a href="https://www.scientificamerican.com/article/arxiv-org-reaches-a-milestone-and-a-reckoning/">replaced traditional publishing models in some fields.</a></p><p>Here’s the thing, though: the final part of Doctorow’s enshittification process as applied to online platforms is “then they die”. There currently seems to be little evidence of that. The outright profiteering and blatant exploitation of researchers has arguably shifted up a gear in the last couple of decades with the advent of open-access fees, but this process has been going on <em>since the 1970s</em>; academic publishers are the original enshittificationists (enshittifiers?). They have kept on re-inventing ways of enshittifying the process of disseminating scholarly information and maintaining their grossly-inflated profit margins, most notably by co-opting the open-access movement and using it as an excuse to charge ridiculous publication fees. They are a leech on the body of scholarly work, slowly sucking out the life-blood, but just never quite enough that researchers and institutions abandon them wholesale. My feeling is that they will continue doing so for many decades into the future.</p><p>I don’t think this is any particularly original set of insights. 
I also don’t think they’re particularly useful. Academic publishing is basically exploitative and evil; all researchers know this. I do think there is something of a question mark over the end stage of Doctorow’s enshittification cycle (“then they die”) though. Whether the modern industry titans (Google, Meta, Amazon) will ever actually die in any meaningful sense or not is still an open question. They may do, or they may (like academic publishing) re-invent themselves and find novel ways of enshittifying the internet, and all our lives, for decades to come.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=62f8b1f9544c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Stop using crazy multiband sequences for fMRI — you’re doing it wrong.]]></title>
            <link>https://medium.com/@m.b.wall/stop-using-crazy-multiband-sequences-for-fmri-youre-doing-it-wrong-2289b1a5a7b1?source=rss-bb4f2cd47757------2</link>
            <guid isPermaLink="false">https://medium.com/p/2289b1a5a7b1</guid>
            <category><![CDATA[fmri]]></category>
            <dc:creator><![CDATA[Matt Wall]]></dc:creator>
            <pubDate>Thu, 01 Jun 2023 16:01:16 GMT</pubDate>
            <atom:updated>2023-06-01T16:27:37.138Z</atom:updated>
            <content:encoded><![CDATA[<h3>Stop using crazy multiband sequences for fMRI — you’re killing your experiments.</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*r63wzPViN8mX0FhD" /><figcaption>Just because you <strong>can</strong><em> get high-resolution images with lots of slices and a &lt;1s TR with multiband doesn’t necessarily mean that you </em><strong><em>should</em></strong><em>.</em></figcaption></figure><p>WARNING: This is going to be a bit of a rant, but also a very, <em>very</em> niche one. These are wildly abstruse issues likely only to be of interest (or frankly, understandable) to those who spend a fair proportion of their life thinking about fMRI methods. Hopefully it’ll be useful to someone though…</p><p>This rant has been prompted by me seeing a couple of sets of data from different scanners recently. Both were using acquisition sequences based on the well-known Human Connectome Project data, both were collected on 3T, high-spec, modern scanners, and the other thing they had in common was that both were garbage. Total dreck. Absolute dogshit. Signal homogeneity was awful, tSNR was hideous, running a simple ICA on them revealed a load of horrible noise components and not much else useful.</p><p>What’s going on here? <a href="https://www.sciencedirect.com/science/article/abs/pii/S1053811913005338?via%3Dihub">The HCP sequences</a> are ‘good’ sequences, in many ways, but they really pushed the envelope in terms of the spatial and temporal resolution, specifically they had a TR of 0.7s and used 2x2x2 mm voxels. They achieved this by using fairly high levels of multiband acceleration: 8x. They did extensive testing of these sequences and found that for them, on their particular scanner, this worked well for their use-case.</p><p>This is the point though: THAT DOES NOT WORK WELL FOR EVERYONE. 
The HCP also collected a solid hour of resting-state data from each person, and scanned hundreds of people — in other words, they had huge experimental power. They had power and signal to <em>burn.</em></p><p>The biggest problem with the HCP approach (for me, opinions may differ!) is the small voxel size. Signal-to-noise with BOLD scales with voxel size or, more accurately, with voxel volume. So going from a more standard 3x3x3mm (or 27mm³) to 2x2x2mm (8mm³) represents a more than three-fold drop in SNR. Now, small voxels are really nice if you’re trying to image small regions, or for other specialised uses, but if you’re doing a ‘standard’ fMRI study (25–30 subjects, whole-brain acquisition, standard analysis techniques) with, let’s face it, questionable levels of experimental power, why would you use small voxels? You’re going to stick a whacking great 6–10mm smoothing kernel over all the data and blur it all to hell when you do the pre-processing anyway! Why do you care about high-resolution? Why would you want to kill your SNR to that extent? As mentioned before, in the HCP they had a solid hour of data and hundreds of subjects — they had power and signal coming out of their eyes. Most people are doing 6–10mins for their resting-state scans in 20-odd subjects. For the love of God, just use 3x3x3mm voxels and save your SNR.</p><p>The other issue is the high level of multiband acceleration. Short TRs are not inherently beneficial. With BOLD imaging you’re sampling the HRF, which is a slow signal that evolves over the course of ~10s or so. It makes little difference whether you sample it every 0.5s or every 2s. 
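(A quick aside to put a number on the voxel-size point from a moment ago; the volume arithmetic is trivial to check:)

```python
# Back-of-the-envelope check of the SNR claim above: BOLD SNR scales
# roughly with voxel volume.
standard = 3 * 3 * 3   # 3x3x3 mm voxels -> 27 mm^3
hcp      = 2 * 2 * 2   # 2x2x2 mm voxels -> 8 mm^3
print(standard / hcp)  # 3.375: a more than three-fold drop in SNR
```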
Yes, there are some statistical benefits that you may get with higher sampling rates, but they’re pretty modest, and more than eaten up by the SNR issues that you get with higher multiband levels.</p><p><a href="https://www.sciencedirect.com/science/article/abs/pii/S1053811918304099?via%3Dihub">In this paper</a> we compared tSNR with different levels of multiband acceleration (on two different scanners):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7sFl4cv3j3bN3cac2PDXMg.png" /><figcaption>tSNR measures for different levels of multiband acceleration.</figcaption></figure><p>As should be clear, above MB3 you start to get severe dropouts in tSNR, particularly in the middle of the brain. <a href="https://www.sciencedirect.com/science/article/pii/S1053811921008909">As this paper also pointed out</a>, multiband fMRI sequences can badly compromise your ability to see effects in these regions. Multiband is really powerful, but at the end of the day, it’s still an under-sampling technique, and if you push it too far, you’re going to lose stuff and gain image artefacts. There’s no free lunch.</p><p>Multiband sequences can also interact with head-motion in weird ways — if you’re scanning any kind of special population (patients, kids, whatever) who are prone to move a lot, fahgeddaboutit.</p><p>Don’t get me wrong, I <em>love </em>multiband, it’s amazing, but I use it very carefully. My ‘standard’ sequence these days is MB = 2 and GRAPPA = 2. This gives a combined 4x (MB*GRAPPA) acceleration, with 40-ish slices of 3x3x3mm voxels — plenty for whole-brain coverage — and a TR of about 1.25s. I’ve pushed it to MB3 on occasion when I wanted to do specific things like use thinner slices to mitigate susceptibility problems in orbito-frontal cortex, but I wouldn’t push it higher than that. (I see no problem in combining multiband with in-plane acceleration like GRAPPA/SENSE, though I know some people are dead-set against it. 
GRAPPA/SENSE are old, tried-and-tested technology, and they work great. Why not use them and keep the multiband acceleration factor lower?).</p><p>If you’re scanning 200 people for an hour each at 7T and have experimental power to burn, then by all means — go nuts with multiband and small voxels. For most of us though, we should really care about optimising signal-to-noise in our datasets of 25–30 people scanned for 10 minutes at 3T. Don’t push the temporal or spatial resolutions if you don’t have to, and don’t push the multiband factor too high. You’re destroying your SNR. Do some testing of different sequences on your own scanner and look at the tSNR. Fuck it; send me your data and I’ll <em>do it for you </em>if you want. Stop murdering your experiments in their cradle with low-SNR sequences that won’t give you any usable results.</p><p>Here endeth the rant. Feel free to abuse me in the comments on here, <a href="https://twitter.com/m_wall?lang=en">on twitter</a>, or email me at mbwall [at] gmail if it’s particularly vitriolic and egregious.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2289b1a5a7b1" width="1" height="1" alt="">]]></content:encoded>
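The two quantitative points in the rant above — the voxel-volume SNR penalty, and checking tSNR on your own scanner — can be sanity-checked in a few lines of Python. This is a minimal sketch with made-up data, not anyone’s actual pipeline:

```python
import numpy as np

# BOLD SNR scales roughly with voxel volume: 3x3x3mm vs 2x2x2mm (HCP-style).
vol_standard = 3 * 3 * 3   # 27 mm^3
vol_hcp = 2 * 2 * 2        # 8 mm^3
print(vol_standard / vol_hcp)  # 3.375: a more-than-three-fold SNR penalty

def tsnr(timeseries_4d):
    """Temporal SNR per voxel: mean over time divided by std over time.

    Assumes time is the last axis and every voxel has non-zero variance.
    """
    return timeseries_4d.mean(axis=-1) / timeseries_4d.std(axis=-1)

# Illustrative fake 'EPI data': 4x4x4 voxels, 100 volumes, mean 1000, noise sd 20.
rng = np.random.default_rng(0)
fake_epi = 1000 + rng.normal(0, 20, size=(4, 4, 4, 100))
print(tsnr(fake_epi).mean())  # ~50, i.e. mean signal / noise sd
```

Running a `tsnr` map like this on pilot scans acquired at different multiband factors is a quick way to see where acceleration starts eating your signal on your own scanner.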
        </item>
        <item>
            <title><![CDATA[The secret history of psychedelic therapy in 1990s Ireland]]></title>
            <link>https://medium.com/@m.b.wall/the-secret-history-of-psychedelic-therapy-in-1990s-ireland-1e82c53db212?source=rss-bb4f2cd47757------2</link>
            <guid isPermaLink="false">https://medium.com/p/1e82c53db212</guid>
            <category><![CDATA[psychedelics]]></category>
            <category><![CDATA[psychedelic-therapy]]></category>
            <category><![CDATA[psychiatry]]></category>
            <dc:creator><![CDATA[Matt Wall]]></dc:creator>
            <pubDate>Wed, 01 Mar 2023 18:36:10 GMT</pubDate>
            <atom:updated>2023-03-02T06:58:55.359Z</atom:updated>
            <content:encoded><![CDATA[<p>Some time ago I was very honoured to be asked to write a cover article for ‘The Psychologist’ magazine (which you can <a href="https://www.bps.org.uk/psychologist/shaking-kaleidoscope-mind">read here</a>, if you like; please excuse the self-promotion, it’s relevant, honest), and as a result of that article coming out I received a rather intriguing email.</p><p><a href="https://www.gregmadison.net/">Greg Madison</a> is a clinical psychologist and psychotherapist, and he very kindly got in touch to say that he liked my article (thanks Greg!), but also mentioned that he’d been involved in psychedelic therapy in Dublin in 1991–1995, using ketamine to treat patients, mostly with PTSD (post-traumatic stress disorder). The head of the clinic at the time was noted Irish psychiatrist <a href="https://en.wikipedia.org/wiki/Ivor_Browne">Prof. Ivor Browne</a>.</p><p>This was highly intriguing, as I’d never heard of any such treatments being used at that time. The conventional histories of psychedelic therapy (such as Ben Sessa’s excellent and comprehensive book chapter: <a href="https://shaunlacob.com/wp-content/uploads/2020/12/History-of-Psychedelics-in-Medicine-2016.pdf">PDF here</a>) usually regard the 1990s as a bit of a dark age for psychiatric use of psychedelics, despite the growing recreational popularity of MDMA in the club/rave scene at the time. This was a time long after the wave of prohibition starting in the 1970s, and before the current revival of psychedelic therapy in the 21st century.</p><p>Based on some information and links that Greg kindly sent me, and some of my own digging, I’ve pieced together the following story. Prof. Browne seems to have been an advocate for the use of LSD in psychiatry in the 1950s/60s. 
I’ve found one published paper from 1960 co-authored by him (<a href="https://journals.sagepub.com/doi/pdf/10.1177/003591576005301108">PDF here</a>), describing the use of LSD therapy in a ‘Psychiatric Night Hospital’ in London. He also mentions his LSD work in <a href="https://www.cambridge.org/core/services/aop-cambridge-core/content/view/6CA5315D23C96EA8A12845F5466D9441/S0955603600106488a.pdf/in-conversation-with-ivor-browne.pdf">this interview</a> (published in <em>Psychiatric Bulletin</em> in 1992), along with a very brief mention of ketamine therapy. In <a href="https://iahip.org/page-1076495">this interview</a> he also talks about LSD, and implies that he moved on to other methods such as holotropic breathwork once LSD became unavailable in the 1970s (but ketamine is not explicitly mentioned).</p><p>According to Greg, he and others were working with Prof. Browne from 1991–1995 in a deconsecrated church, on the grounds of St. Brendan’s Hospital, in Dublin. I believe I’ve identified the likely location, and it’s the chapel in the picture below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/529/0*DKyBxJs58it6twFz.jpg" /><figcaption>St. Laurence’s Chapel, St. Brendan’s Hospital, Dublin, Ireland.</figcaption></figure><p>In Greg’s words:</p><blockquote>We held weekly ketamine sessions for hospitalised PTSD patients, many having suffered abuse, rape, and other traumas. We also held occasional weekend workshops for the general public in holotropic breathwork and ketamine-assisted therapy work.</blockquote><p>He also pointed me to some corroborating evidence, which is a comment on a fairly recent article in the Irish Medical Times (<a href="https://www.imt.ie/clinical/ecstasy-may-enhance-benefits-psychotherapy-24-05-2018/">here</a>) by a Dr Kieran Moore:</p><blockquote>When I was a medical student in UCD, Ivor Browne was doing work with patients with severe PTSD (and possibly other illnesses) using Ketamine and breath work. 
He worked in the old church in St. Brendan’s hospital, and set and setting were very important as well.</blockquote><p>Greg also told me that it was all kept very quiet at the time, and the people involved were instructed not to talk about it. Apparently, the Irish government only agreed to the use of ketamine in this way as an inducement to keep Prof. Browne in Ireland, as he was being head-hunted by Harvard at the time. Presumably this is why the team involved never published anything about this therapy, and why it’s remained a pretty obscure piece of psychedelic history, until now.</p><p>So there you have it — ketamine and holotropic breathwork were being used in the early 1990s in Ireland for therapy with PTSD patients (and likely, others). Amazingly, that’s near-as-dammit a decade before the team at Yale published their <a href="https://www.sciencedirect.com/science/article/abs/pii/S0006322399002309?casa_token=nlcgr1AqYTYAAAAA:QSOljdL4hh28ekKla5hAw3xYFHkEse9_nIh64ipMku-OJ2-lc_L3hzskR5kbovtwVaxnpbasXhQ">now seminal paper</a> on the discovery of the rapid anti-depressant properties of ketamine. Fascinating stuff, and if I turn up anything more on this story, I’ll let you know.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1e82c53db212" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Blinding and placebo-control in psychedelic clinical trials]]></title>
            <link>https://medium.com/@m.b.wall/blinding-and-placebo-control-in-psychedelic-clinical-trials-142031fbbd4c?source=rss-bb4f2cd47757------2</link>
            <guid isPermaLink="false">https://medium.com/p/142031fbbd4c</guid>
            <category><![CDATA[psychedelics]]></category>
            <category><![CDATA[clinical-trials]]></category>
            <category><![CDATA[psychedelic-therapy]]></category>
            <category><![CDATA[placebo]]></category>
            <dc:creator><![CDATA[Matt Wall]]></dc:creator>
            <pubDate>Wed, 28 Sep 2022 12:13:28 GMT</pubDate>
            <atom:updated>2022-09-28T12:13:28.170Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*WwFjSxMYKmtadZrO" /><figcaption>Photo by <a href="https://unsplash.com/@towfiqu999999?utm_source=medium&amp;utm_medium=referral">Towfiqu barbhuiya</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Last week I was lucky enough to attend the <a href="https://icpr-conference.com/">ICPR meeting</a> in Haarlem, Netherlands along with a lot of colleagues from <a href="https://invicro.com/">Invicro</a> and <a href="https://www.imperial.ac.uk/psychedelic-research-centre/">Imperial College London</a>. The conference was really excellent, with a lot of exciting new results and perspectives being shared, and a lot of great new contacts made. I believe the recordings of the talks will be available online in some form, and they’ll be worth checking out if you have interests in this area and couldn’t attend.</p><p>Some of the discussion on <a href="https://twitter.com/hashtag/ICPR2022?src=hashtag_click">twitter</a> (at least on my feed) centred around the issues of blinding and placebo-controls in clinical trials of psychedelic drugs. Briefly, double-blinding is a standard feature of clinical trials, and means that neither the patient nor the trial staff who interact with the patient know which treatment they’re taking (active drug or placebo). This is regarded as an essential feature of many trials, as there are powerful psychological effects which can influence the trial otherwise. 
These include the placebo effect (patients showing positive/better outcomes, even with no active treatment), the nocebo effect (patients showing negative/worse outcomes, even with no active treatment), <a href="https://en.wikipedia.org/wiki/Observer-expectancy_effect">observer-expectancy effects</a> (the researchers subtly influencing the patient), <a href="https://en.wikipedia.org/wiki/Demand_characteristics">demand characteristics</a> (the patient telling the researchers what they think they want to hear), and a number of other subtle biases. In theory, if you use a placebo-control condition and double-blinding, then these effects are not eliminated, but at least they apply equally to both your active and placebo conditions, so any <em>additional </em>benefits you see in the active arm can be reliably attributed to the effect of the drug.</p><p>The problem with this approach in clinical trials of psychedelic drugs is that effective double-blinding is very, very hard. Psychedelics are the most powerful mind-altering substances we know of, and have very clear subjective and behavioural effects. If you’ve taken 25mg of psilocybin, then it’s usually pretty clear to you (the patient), and anyone observing you, that you’ve taken an active dose, as opposed to a sugar pill.</p><p>This is therefore an important problem; in the absence of effective double-blinding, how can we be certain that the (generally, very positive) effects we see in psychedelic clinical trials are genuine, and not some kind of very powerful placebo effect?</p><p>Fortunately, researchers have come up with a number of ways of getting around this issue. None of them are perfect, and all are open to various critiques, but they’re currently the best approaches we’ve got.</p><p><strong>Low-dose psychedelics<br></strong>Low doses of psychedelics are sometimes used as placebo conditions in clinical trials. This is an alternative to ‘true’ placebos (i.e. 
sugar, or cellulose pills), as the doses are so low as to be completely inactive (e.g. 1mg of psilocybin). This was an approach used in the second Imperial College depression clinical trial (1), partly for ethical reasons, as the researchers were then able to tell the patients that all subjects in the trial would receive psilocybin (albeit, some at a very low dose).</p><p><strong>Active placebos<br></strong>The thinking here is that using another active drug which also gives some subjective and behavioural effects as a comparison condition will help to maintain the blinding and obscure which treatment is which (from both the patient and the researcher), particularly from psychedelic-naive patients. The choice of which active placebo to use is clearly crucial. Ideally, you’d want something which is similar enough to the effects of psychedelics, but that doesn’t hit similar brain systems. This is clearly pretty hard — psychedelics have pretty unique and characteristic effects. A recent trial in alcoholic patients (2) used diphenhydramine, an anti-histamine which causes drowsiness. Unfortunately this active placebo wasn’t successful at maintaining the blinding; more than 90% of both patients and therapists correctly guessed the treatment in both the dosing sessions.</p><p>Another previous study (3) compared the effects of psilocybin and dextromethorphan (an NMDA receptor antagonist, somewhat similar to ketamine, but with a complex pharmacology). Even when comparing these two drugs (a ‘classic’ psychedelic, and an ‘atypical’ psychedelic) where the effects are somewhat similar, the majority of subjects were able to correctly guess the drug class they’d been given.</p><p><strong>Dose-dependent effects<br></strong>An alternative strategy is to not use a placebo treatment at all, but to use varying doses of an active compound, and examine dose-dependent effects. 
Assuming placebo/expectancy effects are similar for all doses, any difference in the effectiveness of different doses must be due to the drug. This was the approach used in the recent <a href="https://compasspathways.com/wp-content/uploads/2022/06/COMP001_Topline_Data.pdf">Compass Pathways trial</a> of treatment-resistant depression, where 1mg, 10mg, and 25mg doses of psilocybin were used in different patient groups. Those who received the highest dose had significantly better and more sustained outcomes than the lower doses. To my mind, this is perhaps the most powerful and easily-interpretable approach.<br>(Though I do wish Compass would publish a proper paper on this trial; the data have only been available in a press release so far! This makes it hard to evaluate properly.)</p><p><strong>Comparison with other treatments<br></strong>Another common approach in clinical trials is to compare a new treatment with the current best (or most common) treatment for a particular condition. This was also a comparison made in the Imperial depression trial (1) where a first-line treatment for depression was used as a comparator (escitalopram, a standard selective-serotonin reuptake inhibitor anti-depressant). These comparisons are not so much about controlling placebo or expectancy effects, but more about demonstrating the effectiveness of a new treatment relative to current, commonly-used ones.</p><p><strong>Objective assessments<br></strong>Subjective assessments are often the primary outcome in clinical trials for psychiatric disorders; for instance, these might be changes in depression scores on a self-reported depression questionnaire, like the Beck Depression Inventory (BDI). While these measures have a high degree of validity (if a patient reports that they’re feeling better, then they probably are!) they are vulnerable to demand characteristics. 
Objective assessments of drug effects are therefore a good adjunct measure in clinical trials, and for psychiatric conditions, that often means some kind of neuroimaging. If we can see a clear drug effect on the brain that differs from the control treatment, that’s additional evidence that the drug is doing something useful. Of course, placebo effects could conceivably affect the brain as well, so this is not a perfect solution either, but it does get around the issue of demand characteristics in subjective measures. I’ve argued that <a href="https://psyarxiv.com/xwu4j/">neuroimaging should play a central role in the ongoing development of psychedelic treatments</a> (4), and differences in the brain function of patients in the Imperial psychedelic trial have been identified (5), which is a good proof-of-concept of the general approach. These measures also help to delve deeper into the possible mechanisms of how these drugs exert their effects.</p><p>These are all the different approaches that I’m currently aware of, though there may well be others… None of them are perfect, and I think we ideally need different clinical trials that use different approaches, or perhaps even several of them in combination. Effective blinding and the selection of appropriate control conditions are likely to remain big issues for psychedelic treatment development, and, unfortunately at the moment, there don’t seem to be any particularly easy or completely satisfactory solutions.</p><p><strong>References:</strong></p><ol><li>Carhart-Harris, R., Giribaldi, B., Watts, R., Baker-Jones, M., Murphy-Beiner, A., Murphy, R., Martell, J., Blemings, A., Erritzoe, D., &amp; Nutt, D. J. (2021). Trial of Psilocybin versus Escitalopram for Depression. <em>New England Journal of Medicine</em>, <em>384</em>(15), 1402–1411. <a href="https://doi.org/10.1056/nejmoa2032994">https://doi.org/10.1056/nejmoa2032994</a></li><li>Bogenschutz, M. P., Ross, S., Bhatt, S., Baron, T., Forcehimes, A. 
A., Laska, E., Mennenga, S. E., O’Donnell, K., Owens, L. T., Podrebarac, S., Rotrosen, J., Tonigan, J. S., &amp; Worth, L. (2022). Percentage of Heavy Drinking Days Following Psilocybin-Assisted Psychotherapy vs Placebo in the Treatment of Adult Patients With Alcohol Use Disorder: A Randomized Clinical Trial. <em>JAMA Psychiatry</em>. <a href="https://doi.org/10.1001/jamapsychiatry.2022.2096">https://doi.org/10.1001/jamapsychiatry.2022.2096</a></li><li>Carbonaro, T. M., Johnson, M. W., Hurwitz, E., &amp; Griffiths, R. R. (2018). Double-blind comparison of the two hallucinogens psilocybin and dextromethorphan: Similarities and differences in subjective experiences. <em>Psychopharmacology</em>, <em>235</em>(2), 521–534. <a href="https://doi.org/10.1007/s00213-017-4769-4">https://doi.org/10.1007/s00213-017-4769-4</a></li><li>Wall, M. B., Harding, R., Zafar, R., Rabiner, E. A., Nutt, D. J., &amp; Erritzoe, D. (2022). <em>Neuroimaging in psychedelic drug development: Past, present, and future</em>. PsyArXiv. <a href="https://doi.org/10.31234/osf.io/xwu4j">https://doi.org/10.31234/osf.io/xwu4j</a></li><li>Daws, R. E., Timmermann, C., Giribaldi, B., Sexton, J. D., Wall, M. B., Erritzoe, D., Roseman, L., Nutt, D., &amp; Carhart-Harris, R. (2022). Increased global integration in the brain after psilocybin therapy for depression. <em>Nature Medicine</em>. <a href="https://doi.org/10.1038/s41591-022-01744-z">https://doi.org/10.1038/s41591-022-01744-z</a></li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=142031fbbd4c" width="1" height="1" alt="">]]></content:encoded>
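The blinding failures described above — e.g. more than 90% of patients and therapists correctly guessing the treatment — can be quantified with a simple exact binomial test against chance guessing. A minimal sketch in plain Python; the sample size `n` below is illustrative, not taken from any of the cited trials:

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """One-sided exact binomial tail: P(X >= k) when X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If guesses were at chance (p = 0.5), how surprising would 90% correct be?
n = 90   # hypothetical number of guessers (assumed for illustration)
k = 81   # 90% of them guessing their treatment correctly
print(binom_sf(k, n))  # vanishingly small p-value: the blind has failed
```

In practice, blinding assessments usually report a dedicated statistic such as the Bang blinding index rather than a raw tail probability, but the underlying logic — compare guess accuracy against chance — is the same.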
        </item>
        <item>
            <title><![CDATA[Will imaging-based biomarkers of psychiatric conditions ever be clinically useful?]]></title>
            <link>https://medium.com/@m.b.wall/will-imaging-based-biomarkers-of-psychiatric-conditions-ever-be-clinically-useful-99c158a2d1b1?source=rss-bb4f2cd47757------2</link>
            <guid isPermaLink="false">https://medium.com/p/99c158a2d1b1</guid>
            <category><![CDATA[neuroimaging]]></category>
            <category><![CDATA[biomarker]]></category>
            <category><![CDATA[psychiatry]]></category>
            <category><![CDATA[brain]]></category>
            <dc:creator><![CDATA[Matt Wall]]></dc:creator>
            <pubDate>Sat, 30 Jul 2022 07:20:50 GMT</pubDate>
            <atom:updated>2022-07-30T07:20:50.164Z</atom:updated>
            <content:encoded><![CDATA[<h3>Will imaging-based biomarkers of psychiatric conditions ever be clinically useful? A brief comment on two recent papers.</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*w_gnm0Pv6UzeU4ZS0FhDfQ.jpeg" /><figcaption>Brainzzzzzz</figcaption></figure><p>Biomarkers are an important and increasingly-useful aspect of clinical medicine. Perhaps the most well-developed biomarkers are used in oncology; a simple blood test can screen for a large number of proteins which are indicative of the presence of cancer in the patient, and may even indicate the type and location in the body of particular cancers. These blood screens are a standard part of the diagnostic toolkit in oncology.</p><p>The search for imaging-based biomarkers (or i-biomarkers) is probably as old as the discovery of the X-Ray itself, but it was given new impetus in the last 20 years in psychiatry by the development of sophisticated neuroimaging methods such as Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI). Psychiatric conditions are different to physical/organic conditions like cancer as they are diagnosed not by the presence of any biological feature, but by the presence of a number of behaviours or other symptoms. The hope has been that by examining the brains of patients and developing i-biomarkers, we might provide a more objective clinical test for these conditions.</p><p>To be clinically useful, a biomarker needs to do one of two things:</p><ol><li>Be diagnostically useful, i.e. help in (differential) diagnosis of particular conditions.</li><li>Be predictively useful, i.e. give some indication of either the potential prognosis of a condition, or the likely response to particular treatments.</li></ol><p>Unfortunately, despite a great amount of activity and work in this space, the search for reliable i-biomarkers has so far been (largely) unsuccessful. 
The reasons why this is the case are complex; interested readers can check out an excellent and very readable review paper from 2016 which summarises the issues well (1). We have certainly made some great progress in understanding the neurobiology of psychiatric disease in the last 20–30 years, however, the field remains a largely academic pursuit, with none of the identified differences in the brains of patients being reliable or robust enough to make the translation to clinical practice.</p><p>I was inspired to write this piece by two papers which appeared this week, on exactly this issue. Both are pretty depressing, and suggest that perhaps the development of i-biomarkers in psychiatry might never actually work.</p><p>The first was sent to me by my current <a href="https://twitter.com/RayyanZafar6">PhD student</a> (thanks Rayyan!) and was just published in JAMA Psychiatry (2). This is an amazing study which represents an enormous amount of work, as it uses multiple types of MRI data (structural; functional, both task and resting-state; and diffusion) to derive 11 different imaging measures, across 1809 subjects (861 depression patients, and 948 healthy controls). The key finding here was that none of these putative imaging markers could reliably distinguish between patients and controls, with classification accuracy between 53% and 55% (i.e. close to chance levels of 50%). In contrast, measures of environmental factors (social support and childhood maltreatment) from simple questionnaires were able to distinguish patients from controls with 70% accuracy! Overall, the group differences in neuroimaging measures explained just 2% of the variance in the data! This is… not good.</p><p>The question then becomes… why? Why is this result so poor? 
We know that psychiatric diseases must be instantiated in the brain, so surely by measuring features of the brain in the right way, we should be able to derive diagnostic features of these disorders, right?</p><p>One possible answer to this is that the measurement tools we’re using (i.e. neuroimaging) are so unreliable and noisy that we just can’t get an accurate measurement. Some types of (f)MRI measures do indeed show pretty poor levels of reliability (3), and there have recently been big efforts to improve this by various means (standardised/optimised procedures, larger sample sizes, etc.). However, some measures (e.g. structural MRI) are actually pretty reliable.</p><p>Another possible answer is that we’re measuring the wrong things, in the wrong samples. This is the thesis of another intriguing paper (actually a pre-print) which came out this week (4). These authors have used a combination of empirical and simulated data to demonstrate that, while reliability of the methods is important, it’s not the whole story. They show that even if we had perfectly reliable and robust measures, we still wouldn’t get reliable biomarkers, because the clinical/phenotypic categories we’re using to characterise patient groups are <em>also</em> fundamentally unreliable! For example, what we might call “clinical depression” may actually be a heterogeneous set of related syndromes, each of which may have its own distinct set of brain-based features. By lumping all these different things together and treating them as a single condition, we’re missing this crucial variability.</p><p>If this is correct, it means that the entire approach is fundamentally flawed, and no matter how much we improve the measurement methods, we’ll never derive reliable i-biomarkers of these conditions. 
There is some reason for hope though; the authors suggest that improvements in clinical, behavioural, and cognitive assessments enabled through new technology (web- and smartphone-based data collection, which is becoming a huge area of interest and work) might lead to better phenotypic characterisation of these conditions.</p><p>One thing I’ve learned over 20 years of doing neuroimaging research is that you never know what’s around the corner; the steady improvements in methods and technology show no sign of slowing down, and novel methods of data acquisition and analysis come along all the time. Integrating neuroimaging methods with other things (behavioural data, genetics, etc.) is also a promising avenue. I don’t think these two papers mean that we should give up the search for i-biomarkers in psychiatry, but I do think they underline the point that the journey will very likely not be an easy or short one.</p><p><strong>References</strong></p><ol><li>Abi-Dargham, A., &amp; Horga, G. (2016). The search for imaging biomarkers in psychiatric disorders. <em>Nature Medicine</em>, <em>22</em>(11), 1248–1255. <a href="https://doi.org/10.1038/nm.4190">https://doi.org/10.1038/nm.4190</a></li><li>Winter, N. R., Leenings, R., Ernsting, J., Sarink, K., Fisch, L., Emden, D., Blanke, J., Goltermann, J., Opel, N., Barkhau, C., Meinert, S., Dohm, K., Repple, J., Mauritz, M., Gruber, M., Leehr, E. J., Grotegerd, D., Redlich, R., Jansen, A., … Hahn, T. (2022). Quantifying Deviations of Brain Structure and Function in Major Depressive Disorder Across Neuroimaging Modalities. <em>JAMA Psychiatry</em>. <a href="https://doi.org/10.1001/jamapsychiatry.2022.1780">https://doi.org/10.1001/jamapsychiatry.2022.1780</a></li><li>Elliott, M., Knodt, A., Ireland, D., Morris, M., Poulton, R., Ramrakha, S., Sison, M., Moffitt, T., Caspi, A., &amp; Hariri, A. (2020). What is the Test-Retest Reliability of Common Task-fMRI Measures? New Empirical Evidence and a Meta-Analysis. 
<em>Biological Psychiatry</em>, <em>87</em>(9), S132–S133. <a href="https://doi.org/10.1016/j.biopsych.2020.02.356">https://doi.org/10.1016/j.biopsych.2020.02.356</a></li><li>Nikolaidis, A., Chen, A. A., He, X., Shinohara, R., Vogelstein, J., Milham, M., &amp; Shou, H. (2022). <em>Suboptimal phenotypic reliability impedes reproducible human neuroscience</em> (p. 2022.07.22.501193). bioRxiv. <a href="https://doi.org/10.1101/2022.07.22.501193">https://doi.org/10.1101/2022.07.22.501193</a></li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=99c158a2d1b1" width="1" height="1" alt="">]]></content:encoded>
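One way to see the force of the phenotypic-reliability argument in (4) is Spearman's classic attenuation formula: the correlation you can observe between a brain measure and a clinical label is capped by the measurement reliabilities of both. A minimal sketch — the numbers are illustrative (the ~0.4 echoes the poor task-fMRI reliabilities discussed in (3); the 0.6 for the clinical label is an assumption):

```python
def observed_r(true_r, rel_brain, rel_phenotype):
    """Spearman's attenuation: r_observed = r_true * sqrt(rel_x * rel_y)."""
    return true_r * (rel_brain * rel_phenotype) ** 0.5

# A genuinely strong brain-phenotype effect (true r = 0.5) shrinks badly
# once imaging noise (rel ~0.4) and diagnostic noise (rel ~0.6, assumed)
# are both applied.
print(round(observed_r(0.5, 0.4, 0.6), 3))  # 0.245
```

The point of the sketch: even a perfect imaging measure (`rel_brain = 1.0`) cannot rescue an unreliable phenotype, which is exactly why better clinical characterisation matters as much as better scanners.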
        </item>
        <item>
            <title><![CDATA[A round-up of some of our recent cannabis research publications]]></title>
            <link>https://medium.com/@m.b.wall/a-round-up-of-some-of-our-recent-cannabis-research-publications-dc16064bde9d?source=rss-bb4f2cd47757------2</link>
            <guid isPermaLink="false">https://medium.com/p/dc16064bde9d</guid>
            <category><![CDATA[fmri]]></category>
            <category><![CDATA[cannabis-medical]]></category>
            <category><![CDATA[cannabis]]></category>
            <category><![CDATA[neuroscience]]></category>
            <category><![CDATA[neuroimaging]]></category>
            <dc:creator><![CDATA[Matt Wall]]></dc:creator>
            <pubDate>Fri, 15 Jul 2022 17:12:11 GMT</pubDate>
            <atom:updated>2022-07-15T17:12:11.343Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*F_5Ri-u6LnJjSpll" /><figcaption>Photo by <a href="https://unsplash.com/@daconja?utm_source=medium&amp;utm_medium=referral">David Gabrić</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>For about seven or eight years now, I have had the genuine and unalloyed pleasure of working with a great bunch of people at the University College London <a href="https://www.ucl.ac.uk/psychopharmacology/">Clinical Psychopharmacology Unit</a> and contributing to a number of different projects focussed on the brain effects of cannabinoids. As cannabis transitions towards legal or semi-legal status in a number of places around the world, more people will potentially be exposed to cannabinoids, so understanding the effects on the brain and why/how some users might transition from casual use to addiction, psychosis, or <a href="https://en.wikipedia.org/wiki/Cannabis_use_disorder">Cannabis Use Disorder</a> (CUD) is a strong priority.</p><p>One theme in this work has been the idea that modern high-strength cannabis which can be very high in THC (but low in other cannabinoids such as cannabidiol, or CBD) might be particularly problematic for users. THC and CBD have opposite effects in some respects, so the hypothesis is that the presence of CBD might insulate or ‘buffer’ the user against the harmful effects of THC, to some extent.</p><p><strong>Links to all the papers cited below can be found in the references section at the bottom.</strong></p><p>Our first study was designed to test exactly that idea using resting-state functional MRI to look at key brain networks and the effects of different types of cannabis. 
We found that subjects who were given pure-THC cannabis showed strong disruption in the brain’s salience network, but those who were given a more balanced strain (the same amount of THC, but also containing CBD) showed less disruption (1). We’ve also recently shown similar effects in the same cohort in brain networks centred on the striatum (less disruption with THC) and even found that pure CBD can increase connectivity in striatal networks somewhat (2).</p><p>Following these studies, we embarked on a <em>massive </em>MRC-funded follow-up study, called the <a href="https://www.ucl.ac.uk/psychopharmacology/trials/cannteen">CannTeen</a> (cannabis in teenagers) project. This was intended to look at a number of aspects of cannabis use (different types of cannabis, regular users vs. non-users) in matched groups of adults (26–29 years old) and teenagers (16–17 years old). Cannabis is a popular drug for teenagers and there’s some concern that it may have different, or perhaps more harmful, effects on brains that are still undergoing quite a lot of development in the teenage years. This project took a solid five years to complete, and involved about 450 scanning sessions at the <a href="https://invicro.com/">Invicro</a> clinical imaging facility in West London, as well as many more behavioural/questionnaire sessions conducted by the team at UCL. It’s the largest neuroimaging study looking at the effects of cannabis ever conducted. We’re still sorting through the enormous piles of data we’ve got from this study, but there have been a few papers emerging already.</p><p>First, using a task that measures reward function in the brain (the monetary incentive delay task) we found that there was no difference between cannabis users and non-users in the brain’s main reward centre, but some increased responses in cannabis users in the frontal cortex. 
Interestingly, there was also no difference seen between the adolescent and the adult users (3).</p><p>In some other data which used a behavioural-economics approach, we’ve shown that cannabis users (both adolescents and adults) are more sensitive to immediate rewards (compared to future rewards) than non-users. Adolescents also showed less sensitivity to cannabis price increases here, and a willingness to consume higher amounts of cannabis when it was free (4).</p><p>Other behavioural data with more standard cognitive tasks has shown that users have somewhat worse verbal episodic memory, but spatial working memory and response inhibition seem not to be affected. Adolescents and adults were also equivalent on these measures (5). A more clinically-focussed paper found that adolescent users were more likely to present symptoms of severe CUD, and psychotic-like symptoms, but there were no differences in symptoms of anxiety or depression (6).</p><p>As if all that wasn’t enough, there are other recent papers from a separate study where volunteers were given pure CBD. These papers have shown that CBD appears not to have any effect on reward processing in the brain (7), and that it also seems to have no effect on emotional processing or experimentally-induced anxiety (8).</p><p>All these studies have been massive team efforts, and I’m tremendously grateful to have played a small part in them. I don’t want to single out any of my colleagues in particular, as everyone’s worked hard on them, but just wanted to acknowledge that this kind of science is most definitely a team sport! Plus, there’s <em>lots </em>more to come from the CannTeen study in particular — maybe I’ll do another round-up post in a year or so as an update.</p><p><strong>References</strong></p><ol><li>Wall, M. B., Pope, R., Freeman, T. P., Kowalczyk, O. S., Demetriou, L., Mokrysz, C., … Curran, H. V. (2019).
Dissociable effects of cannabis with and without cannabidiol on the human brain’s resting-state functional connectivity. <em>Journal of Psychopharmacology</em>, <em>33</em>(7), 822–830. <a href="https://doi.org/10.1177/0269881119841568">https://doi.org/10.1177/0269881119841568</a></li><li>Wall, M. B., Freeman, T. P., Hindocha, C., Demetriou, L., Ertl, N., Freeman, A. M., … Bloomfield, M. (2022). Individual and combined effects of Cannabidiol (CBD) and Δ9-tetrahydrocannabinol (THC) on striato-cortical connectivity in the human brain. <em>Journal of Psychopharmacology</em>. <a href="https://doi.org/10.1177/02698811221092506">https://doi.org/10.1177/02698811221092506</a></li><li>Skumlien, M., Mokrysz, C., Freeman, T. P., Wall, M. B., Bloomfield, M., Lees, R., … Lawn, W. (2022). Neural responses to reward anticipation and feedback in adult and adolescent cannabis users and controls. <em>Neuropsychopharmacology</em>. <a href="https://doi.org/10.1038/s41386-022-01316-2">https://doi.org/10.1038/s41386-022-01316-2</a></li><li>Borissova, A., Soni, S., Aston, E. R., Lees, R., Petrilli, K., Wall, M. B., … Lawn, W. (2022). Age differences in the behavioural economics of cannabis use: Do adolescents and adults differ on demand for cannabis and discounting of future reward? <em>Drug and Alcohol Dependence</em>, <em>238</em>, 109531. <a href="https://doi.org/10.1016/j.drugalcdep.2022.109531">https://doi.org/10.1016/j.drugalcdep.2022.109531</a></li><li>Lawn, W., Fernandez-Vinson, N., Mokrysz, C., Hogg, G., Lees, R., Trinci, K., … Curran, H. V. (2022). The CannTeen study: verbal episodic memory, spatial working memory, and response inhibition in adolescent and adult cannabis users and age-matched controls. <em>Psychopharmacology</em>, 1–13.
<a href="https://doi.org/10.1007/S00213-022-06143-3">https://doi.org/10.1007/S00213-022-06143-3</a></li><li>Lawn, W., Mokrysz, C., Lees, R., Trinci, K., Petrilli, K., Skumlien, M., … Curran, V. (2022). The CannTeen Study: Cannabis use disorder, depression, anxiety, and psychotic-like symptoms in adolescent and adult cannabis users and age-matched controls. <em>Journal of Psychopharmacology</em>, <a href="https://doi.org/10.1177/02698811221108956">https://doi.org/10.1177/02698811221108956</a></li><li>Lawn, W., Hill, J., Hindocha, C., Yim, J., Yamamori, Y., Jones, G., … Bloomfield, M. A. P. (2020). The acute effects of cannabidiol on the neural correlates of reward anticipation and feedback in healthy volunteers. <em>Journal of Psychopharmacology</em>, <em>34</em>(9), 969–980. <a href="https://doi.org/10.1177/0269881120944148">https://doi.org/10.1177/0269881120944148</a></li><li>Bloomfield, M. A., Yamamori, Y., Hindocha, C., Jones, A. P. M., Yim, J. L. L., Walker, H. R., … Freeman, T. P. (2022). The acute effects of cannabidiol on emotional processing and anxiety: A neurocognitive imaging study. <em>Psychopharmacology</em>. <a href="https://doi.org/10.1007/s00213-022-06070-3">https://doi.org/10.1007/s00213-022-06070-3</a></li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=dc16064bde9d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to become a (psychedelic) drug researcher]]></title>
            <link>https://medium.com/@m.b.wall/how-to-become-a-psychedelic-drug-researcher-1b723efcd78b?source=rss-bb4f2cd47757------2</link>
            <guid isPermaLink="false">https://medium.com/p/1b723efcd78b</guid>
            <category><![CDATA[academia]]></category>
            <category><![CDATA[cannabis]]></category>
            <category><![CDATA[research]]></category>
            <category><![CDATA[drugs]]></category>
            <category><![CDATA[psychedelics]]></category>
            <dc:creator><![CDATA[Matt Wall]]></dc:creator>
            <pubDate>Thu, 16 Jun 2022 07:58:02 GMT</pubDate>
            <atom:updated>2022-06-16T07:58:02.033Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*8DoyNqfbifzFLdHs" /><figcaption>Photo by <a href="https://unsplash.com/@cdc?utm_source=medium&amp;utm_medium=referral">CDC</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>A question that students have asked me fairly often over the years since I’ve been doing drug research is some variation of “How can I become a drug researcher myself?” This is a very nice question to get, in that it’s very gratifying to be in a field that young people aspire to work in, but it’s also one that I’ve struggled to answer, for two reasons.</p><p>Firstly because I feel like my own career-path (more of a career-meander really) is a terrible example. My PhD was in cognitive psychology, then I worked on low-level vision for several years, then another post-doc on a pain-related project, and finally I got into drug research largely by happening to be in the right place at the right time with the right skills. So in other words, for me, it was largely dumb luck and taking opportunities that happened to present themselves. Not really a platform that you can build some helpful advice on.</p><p>The second reason is that (until quite recently) the pool of available opportunities was very severely limited, particularly in the psychedelic field. That’s changing somewhat now, but my advice a few years ago was essentially: don’t bother. Go and find some other research area that you’re interested in and do that instead. This was clearly always something of a disappointment for the bright-eyed, eager student and I always felt like I was kicking a puppy when I said something like that.</p><p>I still think that “go and do something else instead” is actually quite good advice though. 
The fields of psychology and neuroscience are incredibly broad, and most research areas are genuinely fascinating if you start reading about them and get into it to a sufficient degree. The first thing you need to do if you want to become any kind of professional scientist or researcher is some kind of post-graduate training, and competition for PhD places in particular is <em>intense. </em>Keeping an open mind about the particular research field you want to work in will mean you have a much wider pool of positions you can apply for, and substantially increase your chances of getting on a PhD program. Even if you end up doing a PhD project in a field that’s some distance from drug research, there may be opportunities to steer it in that direction, or you may find you can work on side-projects with other people in your department who are doing more drug-related stuff. You may even find you develop a life-long passion for the particular research field you’ve ended up in — that’s great! Pursue that instead. Alternatively, you may perhaps do a PhD and a post-doctoral position in some other research area, end up as a junior academic and then have the freedom to start moving in the direction of drug research again — that’s great too!</p><p>I imagine that at least some people reading this are now thinking “That’s all very well, but for as long as I can remember it’s been my life-long, core-deep, desperate dream to be a psychedelic researcher and if I don’t achieve that dream then I am doomed to a life of suffering and regret, haunted by the ghosts of unachieved ambitions which will poison all my relationships and any chance for happiness and eventually I’ll die friendless, destitute, and alone.” To this, I would say a) don’t be so bloody dramatic, and b) the rest of this piece is for you.</p><p>As I see it, there are four main routes to getting involved in drug/psychedelic research. I’m not going to sugar-coat it — they’re all <em>very</em> hard. 
They all involve many years of study and work, and none of them are guaranteed. What’s common to all of them, though, is that a focus on acquiring <em>skills</em> is more likely to lead to success. Skills, rather than knowledge, are what matter most in research. If I’m looking to recruit someone for a research job, I don’t really care how much they know about the brain, or pharmacology, or whatever. I care about whether they can do the job, and the job involves programming, statistics, and dealing with weird specialist neuroimaging software. Yes, knowing some brain anatomy and pharmacology is useful, but you can pick that stuff up along the way. Getting some research experience and thereby acquiring some skills is incredibly important. These could be programming, scientific writing, questionnaire design and testing, or anything else useful or relevant. This is probably the only instructive lesson from my own career — I happened to have the right skills that people needed for the research they wanted to do, and that made me useful.</p><p>So, the four main routes are:</p><ol><li>A PhD. Doesn’t necessarily have to be a PhD in anything related — plenty of engineering or computer science PhDs end up working in neuroscience and doing very well, because (guess what?) they’ve <em>got good skills</em>. Doing a PhD in any field is a serious undertaking, and you will likely be poor and over-worked for considerably longer than your contemporaries. Nevertheless, it’s a necessary hurdle to get over if you seriously want to be an academic researcher.</li><li>Clinical psychology. Clinical psychology training usually gives you a DClinPsy degree, which is equivalent to a doctorate. The main problem here is actually getting a place on a clinical psychology training program — they are massively over-subscribed and competitive (in the UK), and many people slave away for years in low-paid (sometimes even unpaid) assistant psychologist positions before being able to get a place.
My feeling though, is that clinical psychology is going to play a vital role in the development of psychedelic therapies in the years ahead, so there may well be increasing opportunities for clinical psychologists to get involved.</li><li>Psychiatry. In some ways this may be the easiest route for being able to transition into drug research after your basic training; qualified psychiatrists are vital to drug research, and are a scarce resource for people working in this area. However, becoming a qualified psychiatrist involves maybe 10+ years of going to medical school, junior doctor positions working in a hospital in a number of different departments, and then further training in psychiatry, so… not exactly ‘easy’ by any definition.</li><li>Industry/commercial drug development. Until quite recently this was a complete non-starter, as the number of commercial organisations doing psychedelic drug research was tiny. However, we’re currently in a phase of massive commercial interest in this stuff, and new companies are popping up on almost a weekly basis. Getting a low-level position at one of these companies may be the easiest route in, but is also the least certain. A lot of these start-ups are small, some are probably doomed to fail, and at some point if you’re serious about a research career you’ll probably need to do some kind of formal training (e.g. a PhD or equivalent) later anyway. Maybe a company would sponsor or support you to do that? Who knows. It’s a very febrile and fast-moving area at the moment, and it’s not at all clear how it’s all going to shake out in the medium-to-long-term.</li></ol><p>This may be dispiriting. Sorry about that. The good news is, there has never been a better time for people who are keen on this stuff to make an impact. 
Ten years ago the number of research groups working on this was in the low single-digits, now there are new labs opening at many major universities around the world, funding is starting to come through, and the commercial side is booming too. If you’re willing to put in the work, do the training, and develop the skills, there’s now a real chance that you can end up working in this area.</p><p>Good luck!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1b723efcd78b" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[My (very preliminary) thoughts on the Kernel Flow device]]></title>
            <link>https://medium.com/@m.b.wall/my-very-preliminary-thoughts-on-the-kernel-flow-device-3c3ad838d85b?source=rss-bb4f2cd47757------2</link>
            <guid isPermaLink="false">https://medium.com/p/3c3ad838d85b</guid>
            <category><![CDATA[psychedelics]]></category>
            <category><![CDATA[neuroimaging]]></category>
            <category><![CDATA[fnir]]></category>
            <category><![CDATA[brain-imaging]]></category>
            <category><![CDATA[ketamine]]></category>
            <dc:creator><![CDATA[Matt Wall]]></dc:creator>
            <pubDate>Thu, 19 May 2022 08:17:49 GMT</pubDate>
            <atom:updated>2022-05-21T05:26:24.878Z</atom:updated>
            <content:encoded><![CDATA[<p>UPDATE: 21st May, 2022: Dr Ryan Field (the presenter I mention in the original piece) has been kind enough to share some detailed responses to some of my questions in the comments below — definitely check out his comment after you read this piece!</p><p>I was lucky enough to attend the <a href="https://www.psychsymposium.com/">Psych Symposium </a>in London last week, and one of the more intriguing presentations was by Dr Ryan Field of <a href="https://www.kernel.com/">Kernel</a> in which he showcased the Kernel Flow brain imaging device. Kernel have <a href="https://www.kernel.com/news/kernel-cybin-partner">recently partnered with Cybin</a> (one of the larger psychedelic drug companies, and one of the more advanced, in terms of clinical trial pipelines) to use the Kernel Flow device in a pilot study of the effects of ketamine on the brain.</p><p>So, what is the Kernel Flow device? It’s a wearable brain imaging device that looks like a kind of segmented helmet, and is based on time-domain functional Near Infrared Spectroscopy (TD-fNIRS) technology.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*7ljwF4lIRnVr8Zxn.jpg" /><figcaption>The Kernel Flow device, modelled here by the company’s founder Bryan Johnson.</figcaption></figure><p>The basic technology of <a href="https://en.wikipedia.org/wiki/Functional_near-infrared_spectroscopy">fNIRS</a> has been around since the 1980s, and it involves shining near-infrared light through the head. Haemoglobin in the blood is a good absorber of near-infrared light, and fNIRS is capable of distinguishing concentrations of oxy- and deoxy-haemoglobin so it’s possible to get information about haemodynamic changes in the brain, with a similar signal to the <a href="https://en.wikipedia.org/wiki/Functional_magnetic_resonance_imaging">BOLD signal used in standard functional MRI</a>. 
Because it’s completely non-invasive, harmless, and relatively lightweight and portable, fNIRS is a popular technique in some of the cutest neuroimaging studies ever conducted, on babies and infants.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*UFWlkmchzWngWykK" /><figcaption>Awwww…. bless.</figcaption></figure><p>The technique has strong limitations, though. The infrared light gets scattered by passing through the scalp and skull, and this limits the spatial resolution of fNIRS to about 2–3cm, an order of magnitude bigger than typical spatial resolutions used in fMRI. The temporal resolution is typically around 10Hz (i.e. sampling ten times per second), which is not as good as other methods such as EEG, but much better than fMRI. In addition, the penetration depth of the light is pretty low — around 1.5–2cm. This means it’s only really good for recording from the most superficial layers of the cortex. For a really good overview of fNIRS principles and techniques, see <a href="https://nyaspubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/nyas.13948">this review paper from 2018</a>.</p><p>Time-domain fNIRS is a relatively recent development where pico-second scale (i.e. very, <em>very</em> short) infrared laser pulses are used, and this allows acquisition of additional information derived from the timing of arrival of photons at the detectors. Since the photons are only travelling a few centimetres at the speed of light, you need <em>exquisitely</em> precise methods to measure the timing variations; this is only possible with modern computer chips that can do pico-second level timing.</p><p>Traditional fNIRS devices use a cap that’s tethered with a cable to an amplifier and interface box, and then a computer to record the data. What the team at Kernel have done is miniaturise the whole apparatus, make it wireless, and put it into a wearable helmet-like device. This is definitely a significant and impressive technical achievement. 
However, the amount of real data they’ve presented so far is pretty thin, and as a neuroimager and neuroscientist, I have questions. Lots of them.</p><p>The Kernel team have published <a href="https://www.spiedigitallibrary.org/journals/journal-of-biomedical-optics/volume-27/issue-07/074710/Kernel-Flow--a-high-channel-count-scalable-time-domain/10.1117/1.JBO.27.7.074710.full?SSO=1">one formal paper</a>, which has lots of details on the technical specifications of the device, but only a brief discussion of its actual performance, and some very preliminary data from two subjects doing a finger-tapping task, with recordings from primary motor cortex. They claim their sampling rate is 200Hz, which is technically impressive, but given that the signals recorded are haemodynamic (blood flow) changes which are slow (on the order of 5–6 seconds) this is not so important; even a ‘standard’ 10Hz rate is still massively over-sampling the haemodynamic signal, so I’m not sure what extra (useful) information a 200Hz sampling rate would give you. The paper doesn’t seem to have any information about spatial resolution or penetration depth; arguably the more important characteristics of the system.</p><p>The data presented at the Psych Symposium was also… kinda weird. You can see the results in <a href="https://www.businesswire.com/news/home/20220509005244/en/Cybin-and-Kernel-Announce-Results-from-Kernel-Flow%C2%AE-Piloting-of-Feasibility-Study-Measuring-Ketamine%E2%80%99s-Effects-on-the-Brain">this press release</a>, but the key slide was this one:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IcBl4CFrrCUcDwmKvLpeMw.png" /></figure><p>This seems to show patterns of functional connectivity for five days before, and five days after, a ketamine dosing session, in a single subject (actually the Kernel company founder, Bryan Johnson). It wasn’t clear how they actually derived these connectivity maps, and what they really mean. 
There seems to be a strong asymmetry in them, with one (pink) hub in the left frontal lobe which has a strong connection to lots of other areas. As I said, kinda weird.</p><p>Also, in the image below, they seem to have derived network plots which include relatively deep-brain structures, with the nodes extending down to at least the level of the thalamus/striatum and some nodes on the medial surfaces of the cortical hemispheres. This seems… odd, given the strong limits on penetration depth of all previous fNIRS systems. Unless the Kernel team have made a real game-changing breakthrough in the technology, this seems unlikely.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*q8WCmH_VNSCKc0rY2I7iEw.png" /></figure><p>Clearly the Kernel team have made a highly-advanced device which can potentially make a real contribution to neuroimaging research. Developments in wearable brain imaging devices like Kernel Flow (and the recent<a href="https://www.sciencedirect.com/science/article/pii/S105381192030481X"> OPM-MEG innovations</a>) mean that we may be able to do neuroimaging outside the confines of an MRI or PET scanner, or even leave the laboratory behind and acquire functional brain data out in the ‘real’ world. This is definitely an exciting prospect; however, it remains to be seen whether the portability and ease of use are truly useful innovations that outweigh the strong limitations on the quality and kinds of data that can be acquired from such devices based on fNIRS technology. I’m very much looking forward to seeing more formal write-ups from the Kernel Flow device and what kinds of data it can actually provide.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3c3ad838d85b" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>