The 5 groups of people involved in IT hypes

Lately AI is all over the place. It’s THE IT hype of today. As were big data, cloud, microservices, NoSQL, etc. Every one of those hypes has really good use cases and brings something new to our toolbox. On the other hand, each is complete overkill for 80% of the use cases it is currently being forced upon.

That got me thinking about who drives what. In the end I came to the conclusion that these folks can be divided into roughly 5 different groups. Of course I slightly exaggerate some arguments to make them clearer.

1: The ‘Analysts’

No, I’m not talking about IT analysts but venture capital analysts! Let’s face it: in the western world we do a lot less manufacturing ourselves nowadays (most of it got moved to cheap-labour countries) but instead have a lot of jobs in the service sector, the financial market, etc. Part of those could also be called ‘unproductive jobs’. IT – partly – falls into this category. Not because what we programmers do isn’t worth anything, but because we are often just a cue ball in the game of big-$$$ investment plans. Every new IT hype fuels the blue-chip stock market. Naturally the analysts are interested in pushing it even further to increase their share prices. And once one hype is ‘milked’, they move on to pushing the next one to boost their stocks.

And the big consulting companies (Gartner, etc), IT magazines, et al. want to make good, easy money themselves. So they help out by recommending technology (which they often don’t even remotely understand themselves) down to the next layer, which is:

2: The CTOs and ‘Bosses’

When dozens of articles about a new IT hype are all over the place and other CTOs are raving about it at the golf club, there is barely a CTO who will freely admit: “Nah, I’ll just wait until I learn more about it, because I really don’t understand it yet”. What they most likely will do is go to their own management and say: “Mr CEO (or technical manager), look at that AI stuff, we need it too, because our competitor uses it as well and will overtake us!”. Do they have a clue what they really want and why they want it? Hell, surely (in almost all cases) not!

Of course there are often good use cases. And if you have a very good technical management team (or someone down the line who they listen to) they will try to implement those use cases first. But that’s not guaranteed. Actually, most of the time it’s just bonkers what they try to achieve with the new hype. How many companies really had a need for big data? Yet many still ran an Apache Hadoop installation. Don’t get me wrong: Apache Hadoop is STILL extremely good IF you have the right use case for it – but for most companies it was really bonkers and overkill!

3: The Copy-Coders

So most of the time these totally out-of-nowhere requirements get pushed down to the development team. And as we all know there is a big spread in developer quality. Of course there are developers who, imo, should not touch production code, but most are really ok-ish. Most of the time it’s the environment they work in which makes them copy-coders. Or, as I used to call them: Stackoverflow-Programmers. Btw, those are right now moving from being Stackoverflow-Copy-Coders to “Vibe-Programmers”. That means: use ChatGPT, Claude, Copilot, etc. and let it suggest code which you then take or adapt based on your gut feeling. In both cases the resulting quality is näääh, not so good. With Stackoverflow you at least get a really good explanation along with the solution most of the time. Of course someone must have the time and will to understand that explanation, but with AI-assisted coding it is now missing altogether.

Usually this leads to a working solution (after a ton of trial-and-error iterations). The downside is that the source code now contains not only the solution but also tons of additional code blocks and logic which got copied over as well but are actually useless. In the best case this only makes maintenance much harder. But more often than not this superfluous code introduces problematic paths and side effects. Yes, it’s a quick way to arrive at a decently working solution, but be aware that it also introduces technical debt, as explained by Ward Cunningham (one of the big IT superheroes – read up on his inventions), who coined that term: https://www.youtube.com/watch?v=pqeJFYwnkjE

4: The Informed Programmer

There are always a few people in a company who try not only to get their tasks done but to truly understand what they are doing. They want to understand the ‘tricks’ behind the APIs they are using. Those are the seniors in the team (regardless of their biological age – I’ve seen excellent ‘seniors’ (and ‘seniorinas’!) who are in their 20s!). What sets them apart is that they really want to understand what they are doing.

Now I have to step back a bit and try to explain what I try to bring across whenever I do some education, be it a conference talk, a lecture or just explaining something to fellow co-workers:
Imagine seeing a trick performed by a magician. The first time you see it, it really IS magic – often seemingly defying physics. The same happens the 2nd and every further time you see it. It IS magic! But once the trick gets explained to you, you cannot unsee it any more! All the magic is gone; you now KNOW how it is done! You went from “known unknown” to “known known” (see also: the Rumsfeld Matrix).

What most programmers do when using some random API is like watching a magician perform his tricks. They roughly know how to use it, what to put in and what they get out – but they do not understand what it really does. It’s magic to them! Or as science fiction author Arthur C. Clarke put it:

“Any sufficiently advanced technology is indistinguishable from magic”

Once you understand the trick you can usually judge what side effects and limitations a new technology has. And when NOT to use it as well 🙂

5: The Implementors

I mention this group just for completeness. There are people who really invent and implement those algorithms, libraries and frameworks. Of course they know how this stuff works – and they are also well aware of what they do NOT yet cover or not yet understand themselves. And that’s a big part. To bring in another quote, this time from one of the first computer scientists, David Wheeler:

“All problems in computer science can be solved by another level of indirection.”

In fact our high-level computer languages are just a pile of hundreds of layers of abstraction built upon one another: from APIs and their actual implementations down to machine instructions, transistors or FETs, down to semiconductor physics and atoms, electrons, quarks and gluons. And if you implement fundamentally new frameworks you have to go down these abstraction layers at least to some degree. And this is obviously complex. Or to put it in my own words (as a person who wrote quite a few specs and libraries in the JavaEE area):

“Behind every problem detail you have to solve
there are at least 3 more problems hiding
(which you also have to solve).”

Which of course a stock market analyst doesn’t care much about…

Measuring the input impedance of a voltmeter

What is the input impedance?

An ideal voltmeter just measures the voltage without drawing any current from the observed circuit. But we live in the real world: our voltmeters have an internal resistance. For a modern multimeter this is usually in the range of 10 MΩ or higher. In the equivalent circuit diagram this is a resistor in parallel to the voltmeter.

Why do we need to know this?

Sometimes you have to measure circuits with very small currents involved, e.g. if you want to measure the leakage current of a diode, a FET or an integrated circuit like a CD4052. Or the leakage current of a capacitor!

For ‘higher’ currents you might simply use your multimeter in its µA range – if you happen to have one. My Fluke 87 can measure down to 0.1 µA, my Peaktech 3360 down to 10 nA. That sounds good enough, but in practice one can measure much further down with little effort by using a high-ohm shunt resistor.

Image

In this example we use a 10 MΩ resistor. If we assume a leakage current of 1 nA we get 10 MΩ × 1 nA = 10 mV. Most multimeters can measure down to 1 mV, some down to 10 µV. My old HP desktop multimeter can measure down to 100 nV.

But why do I need to know the input impedance of my voltmeter? Because it basically forms a parallel resistance to R1. With a standard 10 MΩ input impedance we end up with only 5 MΩ of effective resistance, meaning our voltage drop will only be 5 mV instead of 10 mV.
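The loading effect can be sketched in a few lines of Python (numbers from the example above; the 10 MΩ meter impedance is an assumption for illustration):

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

R_SHUNT = 10e6   # 10 MOhm shunt resistor
R_METER = 10e6   # assumed 10 MOhm meter input impedance
I_LEAK = 1e-9    # 1 nA leakage current under test

r_eff = parallel(R_SHUNT, R_METER)   # effective resistance drops to 5 MOhm
u_drop = I_LEAK * r_eff              # 5 mV instead of the expected 10 mV
```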

How to easily measure the input impedance of a voltmeter?

All we need is a fixed voltage and a resistor. The voltage could come from a lab power supply or simply a battery. The absolute voltage doesn’t even matter – a round number just makes rough calculations easier. The resistance should be rather high; 10 or 20 MΩ works fine.

Image

If we look at the circuit we see that the external resistor R1 forms a simple voltage divider together with the input impedance R2 of our voltmeter. All we need to do is look at the voltage on our multimeter and do a bit of math. The measured voltage Um is the source voltage U times the divider ratio:

Um = U × R2 / (R1 + R2)

Solving for R2 we get

R2 = Um × R1 / (U − Um)
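Turned into a tiny Python helper (just the rearranged divider formula; the numbers in the example call are illustrative, not a real measurement):

```python
def input_impedance(u_src, u_meas, r_series):
    """Meter input impedance from a divider measurement: R2 = Um * R1 / (U - Um)."""
    return u_meas * r_series / (u_src - u_meas)

# Illustrative example: 10 V source, 20 MOhm series resistor,
# meter reads 3.33 V -> input impedance is roughly 10 MOhm.
r2 = input_impedance(10.0, 3.33, 20e6)
```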

Now let’s test out a few of my multimeters!

I’ve got a 20 MΩ resistor

Image

And I dial in 10 V.

Image

When adding the 20 MΩ resistor (measured: 20.438 MΩ) I get 3.313 V in the 4.5-digit mode and 3.513 V in the 3.5-digit mode. Which gives us 10.12 and 11.07 MΩ respectively.

Image

The Peaktech shows similar values: 3.5219 V, meaning 11.1 MΩ.

My lab benchtop multimeter is another category though. While it shows similar values (3.2876 V) in the 10 V range, seemingly weird things happen in the 3 V range.

The HP 3478A high-Z mode

For moving to the 3 V range on my HP 3478A I use my self-built 3.000 V reference (utilising a MAX6071AAUT30+T). It shows 2.99962 V without the resistor.

Image

But when adding the 20.4 MΩ resistor I get weird readings. The value is not really fixed but moves around from 2.99726 to 2.99742. This has to do with the internal multi-slope sampling on such a high-impedance voltage source. Note that we now also start to pick up noise in the cables – the change is in the microvolts range, after all.

Doing the math again we get an internal resistance of around 26 GΩ – yes, gigaohms!
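As a sanity check, plugging the HP numbers into the divider formula (unloaded reference reading 2.99962 V, the lower loaded reading 2.99726 V, and the 20.438 MΩ resistor measured earlier) indeed lands in the gigaohm range:

```python
U_REF = 2.99962      # reference reading without the resistor (V)
U_LOADED = 2.99726   # lower end of the loaded reading (V)
R_SERIES = 20.438e6  # measured series resistor (Ohm)

# R2 = Um * R1 / (U - Um), the same divider formula as before
r_in = U_LOADED * R_SERIES / (U_REF - U_LOADED)   # about 26 GOhm
```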

When operating in the 3 V range we basically get a direct connection to the internal FET stage. And note that – as with every direct differential amplifier stage – we’d better not model it as a linear resistance, but rather like an OpAmp with an input bias current. Also note that this current can be either positive OR negative – meaning there may be current coming OUT of the wires! Let’s check what we got here:

We simply take a 1 µF capacitor with a low leakage current and measure its voltage. First we start by shorting out the capacitor; in this state we measure about a microvolt. When we open the short, the voltage quickly starts to rise…

Measuring the voltage across the 20.4 MΩ resistor gives us values between 120 and 900 µV, which means we have an input current of roughly −6 to −44 pA.
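The current estimate is just Ohm’s law across the resistor; a quick sketch with the two voltage extremes:

```python
R = 20.4e6            # the series resistor (Ohm)

i_low = 120e-6 / R    # about 6 pA
i_high = 900e-6 / R   # about 44 pA
```

The sign has to be inferred from the polarity of the reading; here the current flows out of the meter, hence the negative values in the text.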

Image

That’s pretty fine. But something to be aware of when measuring leakage currents!
