From the course: AI-Powered Software Development: Coding, Testing, and System Design
Automated test case generation
- [Instructor] All right, so far in this course we've been happily generating code and creating entire applications in only a few minutes. But the time has come to get a little more realistic. It's definitely exciting to generate pretty sophisticated applications with only a prompt or two, like we did with our expense tracker application in the previous challenge. But if you want to use these tools for real-world, very large, very complex applications, it's just not going to be enough to have AI generate the code and then, as soon as we see it works, leave it. The fact is, and most developers have seen this at some point in their career, that we have to add tests: a reliable way of making sure that your program continues to do what it's supposed to do as you make changes, and that it doesn't regress in certain areas. We have to add unit tests and integration tests, and ideally end-to-end tests as well. That's what we're going to talk about in this section, starting in this video by seeing how to generate a simple unit test with AI tools.

The good news here is that AI is generally just as good at writing tests as it is at writing production code. That might lead some people to wonder why we need tests in the first place, but the fact is that tests are just as important when you're doing AI-powered software development as when you're not using AI. In fact, they're often more important, because they make sure that your app doesn't turn into a mess, which it will do very quickly under the influence of AI.

So what I have here is a nice little demo function called is_palindrome. The responsibility of this function is to tell us whether or not a string, such as "racecar" or "hello", is a palindrome. By the way, in case you're not familiar with the term, a palindrome is simply a word that's spelled the same backwards and forwards. "racecar" is R-A-C-E-C-A-R, and it's R-A-C-E-C-A-R backward as well, whereas "hello" is not. This function works as far as we can tell, but there's a good chance that someone will come in here in the future and make some small change to this function, and that small change might make it work for their situation while breaking others. Without tests, we're never going to know that.

So what we're going to do here is use GitHub Copilot, and you could do the same thing with Cursor, to generate tests for this code that we've already written. There are a few ways you can do this. The easiest way is to highlight that code and press Command+I if you're using GitHub Copilot. Let me zoom in just a little so you can read that better. We're just going to say, "Generate tests for this function using Pytest." This is another important aspect of generating tests: you want to be very clear about which tools you're using to write those tests. In most programming languages there are lots of different testing frameworks out there, and if you're not specific, chances are pretty good that the tool will just pick whichever one it decides fits the situation. In most cases you'll want to be consistent throughout your project, so if you're using Pytest with Python, you're going to use it for pretty much everything. So let's hit Enter here.
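For context, here's roughly what a demo function like this could look like. The video never shows the exact implementation, so this is a hypothetical sketch: a version that normalizes case and strips spaces, which lines up with the generated test cases we're about to see.

```python
# simple.py -- hypothetical demo module; the video's exact code isn't shown.

def is_palindrome(text: str) -> bool:
    """Return True if text reads the same forwards and backwards.

    Assumed behavior: case-insensitive and spaces ignored, so
    "A man a plan a canal Panama" counts as a palindrome.
    """
    normalized = text.replace(" ", "").lower()
    return normalized == normalized[::-1]
```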
And what this is going to do is generate test code. In this case, and this is something GitHub Copilot has only started doing recently, it asks us to configure our test framework by telling it which one we want to use. We've already said Pytest, but it just wants to make sure. So we'll click Accept, then click Pytest. It then asks for the root directory, so we'll click Root Directory, and that should be all we need. It seems to have frozen up a little, and I'm not sure what it's waiting for, so let's just say, "How about those tests?" And sure enough, strangely enough, that worked. That's the nice part about conversational AI: you can just say things like that, and it will usually work. We can see that it creates tests for our program, so let's click Accept and take a closer look at these tests.

What it's done is use the @pytest.mark.parametrize decorator that Pytest provides. All this does is let us run essentially the same test code with different data, so we don't have to write the same test over and over again. That's generally a good idea in cases like this one, where the only thing that differs between tests is the data. We can see that it's generated quite a few test cases, many of which highlight some sort of edge case. It's testing whether it will still detect palindromes with different cases, which we want to happen, and with spaces: "A man a plan a canal Panama", if you take the spaces out, is in fact a palindrome. It obviously has some false cases like "hello", and it has some edge cases: do we want to consider an empty string a palindrome, or a string that only contains a single space?

One thing I want to highlight, mindset-wise, is that while this looks great, the generated tests should really serve as a sanity check: a way to see whether you and the AI that generated this code are on the same page about what you want this function to do. When you look at cases like these, they might be absolutely not what you want. You might want that empty-string case to be false, for instance, and change it. This is a great opportunity to make sure that you and the AI have the same idea about how the function, or any other code, should behave.

So that's the basics of generating simple test code using an AI tool. There's actually another way you can do this. I'm going to get rid of this file, click Don't Save, and we'll try again. Just like the /doc and /fix commands we saw earlier in GitHub Copilot, many tools also have a dedicated command for tests, and GitHub Copilot is no different. If you type /tests, and you can obviously specify "Use Pytest" there as well, that has the same basic effect as what we just saw. So let's click Accept. We'll save this file, since this time it didn't actually create a new file, and call it something like "test_simple_tests.py", then click Save. Now we have that test file, and it looks very similar. Actually, it's not identical, because it doesn't include the string with just a single space, but it's very close to what we had before.
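To make that concrete, here's a representative version of the kind of test file Copilot generates here. The exact cases vary from run to run, so treat this as a sketch rather than the literal output; it also assumes the demo function lives in a module called simple.py, which the video never confirms.

```python
# test_simple_tests.py -- representative of the kind of file Copilot generates.
import pytest

from simple import is_palindrome  # assumes the demo function lives in simple.py


@pytest.mark.parametrize(
    "candidate, expected",
    [
        ("racecar", True),                       # classic palindrome
        ("hello", False),                        # clearly not a palindrome
        ("RaceCar", True),                       # mixed case should still match
        ("A man a plan a canal Panama", True),   # spaces ignored
        ("", True),                              # edge case: is an empty string a palindrome?
        (" ", True),                             # edge case: a single space
    ],
)
def test_is_palindrome(candidate, expected):
    # Same assertion, different data -- that's all parametrize buys us.
    assert is_palindrome(candidate) is expected
```

Running pytest from the project root picks this file up automatically thanks to the test_ prefix, and any case where your expectations and the AI's disagree shows up as a failing test, which is exactly the sanity check described above.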
Contents
- Automated test case generation (7m 24s)
- AI-driven unit and integration testing (6m 47s)
- Performance and security testing (7m 3s)
- What does TDD look like with generative AI? (7m 33s)
- Challenge: Adding tests to an application (1m 18s)
- Solution: Adding tests to an application (5m 29s)