
Your Tests Are Failing YOU! 

Continuous Delivery

Is there anything more frustrating to a software engineer than intermittent (flaky) tests? We spend a lot of time, effort, and money writing tests to ensure our code is working, so when our tests don't give us definitive answers, it is a real problem for software development teams.
In this episode, Trisha Gee explains how you can avoid or fix flaky, intermittent tests to help us build better software faster.
-
📙 Get Dave's FREE Guide to Eliminating Intermittent Tests here ➡️ www.subscribepage.com/fix-fla...
-
⭐ PATREON:
Join the Continuous Delivery community and access extra perks & content! ➡️ bit.ly/ContinuousDeliveryPatreon
-
👕 T-SHIRTS:
A fan of the T-shirts I wear in my videos? Grab your own at reduced prices, EXCLUSIVE TO CONTINUOUS DELIVERY FOLLOWERS, with money off the already reasonably priced T-shirts!
🔗 Check out their collection HERE: ➡️ bit.ly/3vTkWy3
🚨 DON'T FORGET TO USE THIS DISCOUNT CODE: ContinuousDelivery
-
🖇 LINKS:
Trisha's Website ➡️ trishagee.com/
Trisha On Twitter ➡️ / trisha_gee
Seven Reasons You Should Not Ignore Flaky Tests ➡️ gradle.com/blog/seven-reasons...
3 Key Elements to Incorporate into Your Flaky Test Remediation Approach ➡️ gradle.com/blog/3-key-element...
More from me on slow/flaky tests! ➡️ trishagee.com/presentations/a...
-
BOOKS:
📖 Getting to Know IntelliJ IDEA ➡️ trishagee.com/getting-to-know...
📖 97 Things Every Java Programmer Should Know ➡️ www.oreilly.com/library/view/...
📖 Dave’s NEW BOOK "Modern Software Engineering" is available as paperback, or kindle here ➡️ amzn.to/3DwdwT3
and NOW as an AUDIOBOOK available on iTunes, Amazon and Audible.
📖 The original, award-winning "Continuous Delivery" book by Dave Farley and Jez Humble ➡️ amzn.to/2WxRYmx
📖 "Continuous Delivery Pipelines" by Dave Farley
Paperback ➡️ amzn.to/3gIULlA
ebook version ➡️ leanpub.com/cd-pipelines
NOTE: If you click on one of the Amazon Affiliate links and buy the book, Continuous Delivery Ltd. will get a small fee for the recommendation with NO increase in cost to you.
-
CHANNEL SPONSORS:
Equal Experts is a product software development consultancy with a network of over 1,000 experienced technology consultants globally. They increase the pace of innovation by using modern software engineering practices that embrace Continuous Delivery, Security, and Operability from the outset ➡️ bit.ly/3ASy8n0
TransFICC provides low-latency connectivity, automated trading workflows and e-trading systems for Fixed Income and Derivatives. TransFICC resolves the issue of market fragmentation by providing banks and asset managers with a unified low-latency, robust and scalable API, which provides connectivity to multiple trading venues while supporting numerous complex workflows across asset classes such as Rates and Credit Bonds, Repos, Mortgage-Backed Securities and Interest Rate Swaps ➡️ transficc.com
#developer #softwareengineering #softwaretesting

Science

Published: 14 Jun 2024

Comments: 40
@ContinuousDelivery · a month ago
FREE HOW TO GUIDE - How To Eliminate Intermittent Tests! This pulls together some top tips on how to address the root causes of flaky tests, and how to get test results you can trust. Download your copy HERE ➡ www.subscribepage.com/fix-flaky-tests
@justinbehnke8724 · a month ago
I enjoyed the point about how it's a slippery slope that impacts morale. I'm in a unique position of largely working alone on a repo: I have a suite of TDD tests, I subject each new class to mutation testing, and I have a mostly happy-path E2E test suite. Working alone is hard on my own morale (not sure if that makes sense), but I find solace in knowing that my work is the best I can do.
@christopherdunderdale7238 · a month ago
Love this - not just a rant, but some useful suggestions on how to go about this as a team.
@brownhorsesoftware3605 · a month ago
I am against ignoring "flakey" tests. I can tell you a horror story about what it is like to have 3 days to fix a timing problem that people had ignored for 3 months. A "flakey" test can be a harbinger. I always investigate them. But I am also against a backlog with more than zero items. So there's that.
@Rcls01 · a month ago
I worked in an org with a legacy app that had 10 years' worth of baggage, a big ball of mud, and huge E2E tests that took hours to run and were green only twice in my three years there. Yeah... I left. To most, that was the norm. For me, it was a fundamental issue that I couldn't fix.
@calkelpdiver · a month ago
Good talk. I liked the point about isolating the "flaky" tests on their own and then either working on them or getting rid of them (delete, or rewrite from scratch). As part of this, as you note, you have to look at the dependencies of the script/test. These can be related to run order (one test depends on another), to data (the test needs data in a certain condition, seeded in at the start, or changed by another script/test), or to a combination of the two. On top of that there can be a dependency on an outside service (API). There can also be latency issues to deal with: running a boatload of tests in parallel can artificially create a load on the system that causes latency or data conflicts (a locked row or table in a database). It comes down to what you said about building in time to go over things and try to resolve the reason for the flaky test.
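One cheap way to flush out the run-order dependencies described in the comment above is to randomize test execution order on every run. A minimal sketch, assuming JUnit 5 (the thread doesn't name a test framework); the test class and fixtures are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

import static org.junit.jupiter.api.Assertions.assertEquals;

// Running methods in a random order each run surfaces hidden
// run-order dependencies before they show up as intermittent failures.
@TestMethodOrder(MethodOrderer.Random.class)
class OrderIndependenceTest {

    @Test
    void eachTestBuildsItsOwnData() {
        List<String> basket = new ArrayList<>(); // fresh fixture, no shared state
        basket.add("apple");
        assertEquals(1, basket.size());
    }

    @Test
    void noTestReliesOnAnotherRunningFirst() {
        List<String> basket = new ArrayList<>(); // again built locally, not inherited
        basket.add("apple");
        basket.add("pear");
        assertEquals(2, basket.size());
    }
}
```

If a suite only passes in its declared order, the hidden dependency is found before it turns into intermittent red builds.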
@TrishaGee · a month ago
Yes - we found flakiness in the tests in the place where Dave and I worked together, and one cause turned out to be a race condition that we only saw under the load we had in the test environment. This was a genuine bug, but not one that was seen in less busy environments. These things are hard to find!
@calkelpdiver · a month ago
@TrishaGee Oh, I know. I've been working in software testing for 36 years now, 32+ with automation in various forms, and some performance test work as well. Sometimes a flaky test actually points to a real issue in the system under very specific circumstances.
@mineralisk · a month ago
Some CI systems have automatic detection of flaky tests, which is really helpful.
@KeeperKen30 · a month ago
Flaky tests are usually a sign of an enterprise that has tests simply to say they have tests, not to act as a solid form of quality testing. If unit/integration tests aren't actually testing the system, they are worse than useless, as you spent time on something with zero benefit. In my opinion, unit tests for database code should hit an actual database (a clean instance with control data is best). Mock crossing boundaries (logic tests can mock the database and validations; REST controller tests can mock business logic). Once you pay the cost of standing up a development environment where unit and integration tests are as close to the real thing as possible, it actually becomes difficult to introduce new defects, as those tests, if done correctly, will find them before you deploy.
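A minimal sketch of the "clean instance with control data" idea above, assuming an H2 in-memory database on the test classpath and JUnit 5 (neither is named in the comment); the table and data are hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class CustomerRepositoryTest {

    @Test
    void findsCustomerSeededAsControlData() throws Exception {
        // A fresh in-memory instance per test: no state leaks between runs,
        // but it is still a real database, not a mock.
        try (Connection db = DriverManager.getConnection("jdbc:h2:mem:customers");
             Statement sql = db.createStatement()) {
            sql.execute("CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(50))");
            sql.execute("INSERT INTO customer VALUES (1, 'Ada')"); // control data

            try (ResultSet row = sql.executeQuery("SELECT name FROM customer WHERE id = 1")) {
                row.next();
                assertEquals("Ada", row.getString("name"));
            }
        }
    }
}
```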
@user-po8el8ok2c · a month ago
Spot on Trish, more videos like this please.
@retagainez · a month ago
I know this video seems to discuss tests that fail intermittently (perhaps due to some asynchronous work) without the code changing, but I also think of flaky tests as tests that fail whenever you make any code change. The follow-up questions in fixing tests that fail unpredictably might be along the lines of "are we testing correctly," "is our code poorly designed," or "is the test poorly designed?" I'd really like a book that goes over these kinds of intuitions in a more dedicated manner. Perhaps something like a testing anti-patterns book.
@tzadiko · a month ago
That's not a flaky test, it's a brittle test... a totally different issue. Brittle tests are often a symptom of poor design, but sometimes it's just a badly written test.
@retagainez · a month ago
@tzadiko Right, the term escaped me in the moment. That's what interests me.
@banatibor83 · a month ago
Our tests are pretty stable, but when one becomes flaky, we disable it and immediately open a bug for it in JIRA.
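In JUnit 5 that disable-plus-ticket step can be made visible right at the test, so the quarantined test stays traceable; a minimal sketch (the test and ticket id are hypothetical):

```java
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

class CheckoutFlowTest {

    // Quarantined, not deleted: the annotation carries the ticket,
    // so the reason stays traceable right next to the test.
    @Disabled("Flaky under parallel runs - tracked in PROJ-1234 (hypothetical ticket id)")
    @Test
    void appliesDiscountAtCheckout() {
        // ... original test body unchanged ...
    }
}
```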
@johanblumenberg · a month ago
The term "flaky test" is the most misleading term I know. It very easily becomes an umbrella term for anything that causes a test to fail sometimes. The first thing it tells me is that the problem is with the test, not the product: you need to fix the flaky test. Can I release it? Of course, it's just the test which is flaky. The second thing is that once you label it a flaky test, it becomes the responsibility of the test team to solve the problem. Now, if you give the test team the task of fixing the flaky tests, what will they do? Fix the tests. Regardless of whether it is a bug in the product or not, they will make the tests go green. I prefer to simply refer to it as a bug until I know what the root cause is.
@tzadiko · a month ago
Wow did not expect this to get so deep right off the gate. Why are we all here?
@SteinGauslaaStrindhaug · a month ago
I get that some kinds of "tests" that involve multiple systems over a network are inherently not completely deterministic. But even those should generally only fail when there's a major network issue, a server is down, or something like that happens. If it keeps randomly happening 50% of the time the test is run, because of timeouts etc., that means there's something fundamentally wrong with your system, and it's not the test that is "flaky". Test scripts are generally more patient than end users, so if even the test script times out waiting for responses (maybe because of a hard maximum response time limit imposed by some other system), this must be a serious bug that is also _constantly_ annoying your users. If not, then the test is probably testing an unused part of the system!
@SteinGauslaaStrindhaug · a month ago
And if the test is testing a part of the system that _is_ in fact used, but there are no actual issues in production or on test servers, _only_ when the test runs: how the frick did you manage to make the test itself nondeterministic if the system it tests is actually stable!? Are people writing tests with deliberate calls to random() somehow? That would be a "flaky test" (and a very pointless test); if the flakiness comes from the system being tested, then it's not really a flaky _test_ but a flaky system, which really should be fixed immediately.
@hiftu · a month ago
We use the cloud to inject more environment issues into our CI/CD environment. This is not to test for stability. We (khhm, the management decision to adopt cloud) just screwed ourselves. Now we are happy to use this much less effective way of working, where retriggering a job is commonplace. Viva la Cloud!
@FlyingOctopusKite · a month ago
So many flaky tests 😢. The web is so prone to flaky tests, especially SPAs, as it is hard for test runners to know when the page is actually ready to validate.
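One common mitigation is to wait for an explicit readiness signal instead of a fixed delay. A minimal sketch, assuming Selenium 4 in Java (the comment doesn't name a tool); the URL and element id are hypothetical:

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

class SpaReadinessExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/app"); // hypothetical SPA under test

            // Poll until the SPA has actually rendered the element we care
            // about (or the timeout expires), rather than sleeping blindly.
            new WebDriverWait(driver, Duration.ofSeconds(10))
                    .until(ExpectedConditions.visibilityOfElementLocated(By.id("order-summary")));

            // ... assertions against the now-rendered page ...
        } finally {
            driver.quit();
        }
    }
}
```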
@douglascodes · a month ago
Preach!! 🙏
@richardhood3143 · a month ago
I recommend deleting the test. The cost to investigate and fix a flaky test is higher than the cost of writing a good test for an area that is missing good coverage.
@NachtmahrNebenan · 29 days ago
I don't have flaky tests, … because I'm the only one who writes tests at all! 😭
@Zeioth · a month ago
I ran into this when I was writing a test for a logger service: I wanted to assert that it was indeed writing to the log file. But because our tests run asynchronously, and other parts of the code would also make use of the logger, I started to get intermittent results. Then I realized: we can mock the actual logger.log() and trust the provider of the package. Then we can focus on asserting the parts of the code that are actually our responsibility. For example, we could test that the logger singleton actually returns a valid logger object.
@Aleks-fp1kq · a month ago
How did you mock the actual logger?
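A minimal sketch of one common answer, assuming SLF4J and Mockito (the thread doesn't say which logging or mocking libraries were used): inject the logger as a dependency and verify the call, instead of reading the log file. The service class here is hypothetical.

```java
import org.junit.jupiter.api.Test;
import org.slf4j.Logger;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

class PaymentServiceTest {

    // The service takes its Logger as a constructor dependency,
    // so a test can hand it a mock instead of the real logger.
    static class PaymentService {
        private final Logger log;
        PaymentService(Logger log) { this.log = log; }
        void pay(int amount) { log.info("paid {}", amount); }
    }

    @Test
    void logsThePayment() {
        Logger log = mock(Logger.class);
        new PaymentService(log).pay(42);
        verify(log).info("paid {}", 42); // assert the call, not the file on disk
    }
}
```

This keeps the assertion synchronous and isolated, which removes the race with other code writing to the same log.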
@TheEvertw · a month ago
Flaky tests indicate something is wrong: either in the test case, in the test framework, or in the system under test. You MUST find out which of the three it is and fix the problem, because even if you think you know why the test is failing, that assumption can hide an actual issue.
@brucedavis9191 · a month ago
Great content, Trisha. I would argue that flaky tests actually provide significant negative value since they sow confusion while also slowing down the build. Thanks for all of the advice on how to mitigate the damage.
@michaelslattery3050 · a month ago
IMO, flaky tests should be rare. E2E tests should not be driven directly by accessing HTML fields, the database should be re-built via migrations, and external REST services should be mocked. Instead of accessing HTML fields, we access a JavaScript API (such as a client-side reactive store). That said, "smoke" tests can be flaky, but there should be very few such tests: they exist to make sure everything works together, and shouldn't really test logic.
@mrpocock · a month ago
Flaky tests are usually a sign that things are very badly wrong. They also are often a symptom of something being wrong with the design.
@PaulSebastianM · a month ago
Exactly. It's a sign, no, rather a proof that your system does not actually enforce its own business rules. 😂
@Aleks-fp1kq · a month ago
Oh, isn't bad design the root cause of all evils 😊?
@vicaya · a month ago
Flaky tests are perfect for training your test AI to root cause test failures :)
@queenstownswords · a month ago
Removing a test without traceability is WRONG. If there is a flaky test, file a bug. The bug fix may include removing the test, but then there is a traceable reason for removing it. Bugs will get attention and will likely be 'fixed'. When determining whether the test is flaky, add a percentage to the bug, so you have a fail frequency like 1/3, i.e., it fails 1 out of 3 times. Prevent running failing tests with a 'Sealed Loop' approach. There... done.
@ZajoSTi · a month ago
Flaky tests signal that the tests were designed very poorly. Those that are inherently flaky should raise suspicion: are those tests even worth it in the first place? They have no return on investment, nor value. I have seen this so many times. Because they are "only" tests, people think they can be engineered poorly (if engineered at all). Because of that they will be flaky, and people will lose faith and interest in the test suite.
@ErazerPT · a month ago
Sorry, but I see this more as a failure to address the "real world". When tests were made pass/fail, it left out the very real-world third option of "didn't finish because {insert reason here}". The real world IS flaky, and you need to deal with it, not just the "happy paths". If I get back "network was down, so I 'failed'", that's perfectly reasonable and tells me to go look at the network before further testing. It also tells me that IF the network was REALLY down, my component handled it as it should. A "real fail" is some component x just failing without giving me a reason WHY.
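JUnit 5's assumptions give exactly that third outcome: the test aborts with a reason instead of failing. A minimal sketch (the comment names no framework; the host and reachability check are hypothetical):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.junit.jupiter.api.Assumptions.assumeTrue;

class ExternalServiceTest {

    @Test
    void queriesTheUpstreamService() {
        // Aborted (reported as skipped, with a reason) rather than failed
        // when the precondition isn't met: the third outcome described above.
        assumeTrue(hostReachable("upstream.example.com", 443),
                "network/upstream down - aborting, not failing");

        assertTrue(true); // ... real assertions against the service ...
    }

    private static boolean hostReachable(String host, int port) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 1_000);
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}
```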
@marshalsea000 · a month ago
Seeing a lot of rubbish here. Flaky tests are an indicator that something is wrong: it MAY be the test, it MAY be the code under test, it could be non-SOLID code that is highly untestable bilge, or the code could be relying on the same test data. Unfortunately, in the new era of low-quality coders who don't understand the basics and are suddenly senior, this is only going to get worse and worse.
@TheEvertw · a month ago
"Inherently flaky tests": that is a VERY small minority. Even waiting for external resources can be made deterministic, by waiting until the system is in a predefined state before running the actual test. I have spent years developing distributed systems, and flaky tests were NOT ACCEPTABLE to us. One major cause of flaky tests is not having the shutdown procedures properly implemented, which means that tests cannot clean up after themselves properly. Make sure your shutdown procedure is bullet-proof, and tests where processes are started and stopped as part of the test will run flawlessly. For instance, if you terminate a thread, use `join` to wait until it is _actually_ terminated.