The Book of Redgate: Do the Right Things

I do believe that Redgate has been very customer focused since its inception. I’ve worked with them in some capacity since 2002, and I’ve felt this along the way:

[Image: 2026-02_0177]

The next page has this statement:

We believe that if we do what is right for our customers then we will thrive.

I think that’s been true when we keep this in mind. The (relatively) few times we’ve started to do things for ourselves rather than thinking about customers, things haven’t worked out as well.

I think this sentiment is one that guides a lot of my life. Certainly inside Redgate, but also in the rest of my life. If I do what is best for another person, or for the world, that often works out well. It doesn’t mean I’m as efficient, as profitable, or as relaxed as I could be.

But I’m happier and I thrive.

I have a copy of the Book of Redgate from 2010. This was a book we produced internally about the company after 10 years in existence. At that time, I’d been there for about 3 years, and it was interesting to learn some things about the company. This series of posts looks back at the Book of Redgate 15 years later.


Local Agents

Recently I saw an interesting article claiming that someone could build a general-purpose coding agent in 131 lines of Python code. That’s a neat idea, though I’m not sure it’s better than just using Claude Code, especially as the agent still uses the online Claude model from Anthropic to generate code and perform other tasks. There’s a video in the article showing how this code can be used to perform some quick tasks on a computer.

However, the code isn’t specific to Anthropic. It can be used with any LLM, and I started doing just that with a copy of the code from the article, modified to use a local LLM running under Ollama. You can see my repo; feel free to download and play with it. It expects a local LLM listening on port 11434.
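To give a flavor of what pointing code at a local model looks like, here’s a minimal Python sketch of calling Ollama’s REST API. The model name (llama3.2) and the helper names are my own inventions; the endpoint, port, and payload shape come from Ollama’s documented /api/generate route:

```python
import json
import urllib.request

# Default Ollama endpoint; 11434 is Ollama's standard port.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks the server for one complete JSON response
    instead of newline-delimited streaming chunks."""
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")

def ask_local_llm(model: str, prompt: str) -> str:
    """Send a prompt to a local Ollama server and return the text reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The generated text comes back in the "response" field.
        return json.loads(resp.read())["response"]
```

A call like ask_local_llm("llama3.2", "Write a haiku about SQL") would hit only the local server; nothing leaves your machine.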

I’m a big fan of local agents for a variety of reasons, but mostly because I know humans tend to do dumb things. Especially with new technology, and maybe even more so in development work.

That includes me.

I’ll take shortcuts. I’ll give an agent sysadmin rights on a dev database to try things. I want to be able to experiment, learn, and see what works. I want to learn how to use tools and fail using them. That’s how I get better. That’s how I get better in sports, in music, and in technology.

And that’s not a project I can take time to work on. I don’t get to dedicate time to just learn and then go back to work. Work never ends. It’s a grinding, constant, continuous treadmill of things I need to deliver to others. I have to learn to experiment around those deliverables when I can find spare moments.

With AI, that means we’ll do things that make InfoSec teams cringe. I get the concerns over data transiting networks and going who-knows-where to be used who-knows-how by others. I appreciate business subscriptions that guarantee that data won’t be used, but I also want extra safeguards at times. That means local models. Not necessarily on my laptop, but in my data center.

Plus, that way I (or my org) can control the costs and manage expectations.

I hope local models and local agents catch on, and that more vendors support them and more organizations are willing to run them. Even in something like AWS Bedrock, Azure OpenAI, or Vertex AI. Then I can rent the latest and greatest hardware, but have more control over how my organization uses it.

Steve Jones

Listen to the podcast at Libsyn, Spotify, or iTunes.

Note, podcasts are only available for a limited time online.


Every Database Has Problems

Every database platform has some strengths and weaknesses, some more than others. I caught this site (NSFW) from Erik Darling, and it made my day. I was having a tough one, and this site got me to smile and chuckle out loud a few times. I especially like the MySQL and SQLite links (again NSFW).

Every platform that you might choose to back an application can work in many situations. Certainly scale and load are factors to consider, but for the major relational database platforms, most will work fine for many applications. Some might work better than others, but there are always tradeoffs. There are pros and cons. This is also true for the major NoSQL platforms, though most of my experience is with relational ones, so I tend to lean in that direction.

At the same time, any platform can fail horribly.

What’s the difference? Quality database design and software engineering. If you have a knowledgeable staff that works with the platform, they can likely make it work well. If they don’t consider the database impact when they code, or aren’t skilled with that platform, they can easily make it seem like the database doesn’t work well at all. Lots of hardware can help, but it often can’t outrun poor data models, poor query structures, or a lack of indexing.

Quality of code matters, as many data professionals know. We often aren’t given enough time to do the job right, but we know that’s the case. It doesn’t do any good to complain or bemoan the fact that there is never enough time to fix things or improve them.

We need to write better code to start with, which means learning to write better code. Understand what impacts performance and where you can change your patterns and habits. Read Erik’s posts, and learn from Jeff how to build test data sets to stress your queries if you don’t have good test data. Learn to do a better job in the same amount of time.

Changing platforms won’t magically fix things, no matter what your CTO/director/manager thinks. Especially if your team doesn’t already have experience on the new platform.

Steve Jones



Get a Range of Sequence Values: #SQLNewBlogger

I discovered a procedure recently that I wasn’t aware of: sp_sequence_get_range. This post looks at how the proc works.

Another post for me that is simple and hopefully serves as an example for people trying to start blogging as #SQLNewBloggers.

The Setup

I have a sequence object, IDCounter, that is an integer with an increment of 2. The next value that is returned is shown here:

[Image: 2026-04_0249]

The next value returned will be 99 (increment by 2).
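The original sequence definition isn’t shown in the post; a definition like this would produce the behavior described. START WITH 1 is my own assumption — the post only tells us the increment is 2 and that the next value at this point is 99:

```sql
-- Hypothetical definition: any odd starting point works, as long as
-- the sequence has already advanced to 97 before the range call.
CREATE SEQUENCE dbo.IDCounter
    AS INT
    START WITH 1
    INCREMENT BY 2;

-- Inspect the current state from metadata without consuming a value:
SELECT current_value
FROM sys.sequences
WHERE name = N'IDCounter';
```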

However, imagine that I know I need 10 new values. I don’t want a loop to get these values. Instead, I want to move the sequence ahead by 10 values.

These ten values will be 99, 101, 103, 105, 107, 109, 111, 113, 115, and 117. The sequence’s current value should then be 117 after we reserve those 10 values.

Let’s use sys.sp_sequence_get_range to do this. I’ll use this code:

DECLARE @i SQL_VARIANT;

EXEC sys.sp_sequence_get_range
    @sequence_name = N'dbo.IDCounter',
    @range_size = 10,
    @range_first_value = @i OUTPUT;

SELECT @i AS RangeStart;

I need to use a SQL_VARIANT for the output parameter, though I can cast it to anything once I have the value.

When I run this code, notice the output.

[Image: 2026-04_0251]

Now if I check the metadata, I’ll see the current value is 117.

[Image: 2026-04_0252]

There are client-side applications that gather a bunch of data and know they need to insert xx rows. This proc lets them both update the sequence and reserve those values for themselves. Of course, if the application fails, the reserved values might be lost.
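Putting the pieces together with the cast mentioned above, a complete round trip might look like this. The variable and alias names here are my own:

```sql
DECLARE @i SQL_VARIANT, @FirstValue INT;

-- Reserve a block of 10 values in one call.
EXEC sys.sp_sequence_get_range
    @sequence_name = N'dbo.IDCounter',
    @range_size = 10,
    @range_first_value = @i OUTPUT;

-- Convert the variant back to the sequence's underlying INT type.
SET @FirstValue = CAST(@i AS INT);

SELECT @FirstValue AS FirstReservedValue;
```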

SQL New Blogger

A quick post. This took me about 5 minutes to test and about 10 minutes to structure a quick post on something I learned.

As a follow-up, I’ll use another post to show how this works when one application reserves these values and another application performs the inserts.

You could easily do this on your blog and show some knowledge.
