Good programming practices are good for quantitative research... and, as it turns out, for clinical practice too:
https://blog.leonardof.med.br/2025/ckd-epi-sbn.html
I went to apply unit testing to the statistical analysis code I was writing, and discovered that an online calculator from the Sociedade Brasileira de Nefrologia (Brazilian Society of Nephrology) was wrong.
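For illustration, here is a minimal sketch (not the actual analysis code from the post) of how a unit test can sanity-check an eGFR calculation. The ckd_epi_2021 function below is a hypothetical implementation of the race-free CKD-EPI 2021 creatinine equation, and the tests only assert structural properties of the formula rather than clinical reference values:
```python
# ckd_epi_2021 is a hypothetical implementation written for this sketch,
# not the code from the linked post.

def ckd_epi_2021(scr_mg_dl: float, age: float, female: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from serum creatinine, age, and sex."""
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    ratio = scr_mg_dl / kappa
    egfr = 142 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.200 * 0.9938 ** age
    return egfr * 1.012 if female else egfr


def test_egfr_decreases_with_creatinine():
    # Higher creatinine must never yield a higher estimate.
    values = [ckd_epi_2021(scr, age=60, female=False) for scr in (0.6, 0.9, 1.5, 3.0)]
    assert values == sorted(values, reverse=True)


def test_egfr_decreases_with_age():
    # Same creatinine, older patient -> lower estimate.
    assert ckd_epi_2021(1.0, age=80, female=True) < ckd_epi_2021(1.0, age=40, female=True)
```
Even simple property checks like these, run with pytest, are enough to flag an implementation whose output moves in the wrong direction.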
I've just released #cmocka version 1.1.8:
* Set CMOCKA_LIBRARIES in package config for backwards compatibility
* Improve c_strreplace implementation
* Sanitize XML strings
* Update check for uintptr_t
* Require cmake >= 3.10
My next article on #Rust #UnitTesting is out:
https://jorgeortiz.dev/posts/rust_unit_testing_simplify_tests/
In this one I share some tips on simplifying test code.
Spread the word
Automated tests in my survival mode quest plugin :D
Now I just have to write more of em
https://github.com/sammypanda/MCJE-PlayerQuests-Plugin/pull/169
@encthenet Oh darn, yeah that's mildly annoying of numpy. But there's still the regular temporary file handling. Who knows, working with in-memory fake files may not even make a significant difference if you could do it, compared to temporary files if they live on, say, a shared memory partition.
That won't work. It says so in the docs:
> pyfakefs will not work with Python libraries that use C libraries to access the file system.
Which is what numpy is doing.
@encthenet @rachelplusplus Dunno if you've considered this, or would consider it, but pytest has a plugin, pyfakefs (https://github.com/pytest-dev/pyfakefs), that implements an in-memory filesystem complete with file objects that you can use to test file handling code without having to put things on disk. IMO the plugin ecosystem for handling things like this is one of pytest's biggest advantages over unittest.
Or if you prefer, pytest also has built-in functionality that makes working with temporary files pretty easy.
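For reference, a minimal sketch of that built-in functionality: pytest's tmp_path fixture hands each test its own temporary directory on disk. save_array here is a hypothetical stand-in for the file-writing code under test:
```python
# save_array is a hypothetical stand-in for the NumPy file-writing code under test.
import numpy as np


def save_array(path, array):
    np.save(path, array)


def test_save_array_roundtrip(tmp_path):
    # tmp_path is a pathlib.Path pointing at a fresh per-test temporary directory.
    data = np.arange(10)
    target = tmp_path / "data.npy"
    save_array(target, data)
    assert np.array_equal(np.load(target), data)
```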
A couple weeks ago, I gave a talk at @omt_conf on What's New in Testing. The talk was recorded, but while I wait for the video to be edited and published, I've posted an edited (and updated!) version of my speaker notes from it.
There's a lot new in testing since last year. I'm still surprised there wasn't a WWDC video about all the new things you can do.
https://rachelbrindle.com/2025/06/26/whats-new-in-testing-swift-6-2/
In fast-paced SaaS dev, unit tests aren't just insurance: they're leverage. Run them constantly. Automate the grind. Let tests catch regressions before your users do. #SaaSDev #UnitTesting #BuildInPublic
I made a small tool called SpecSCAD to help write unit tests for #OpenSCAD functions using a #BDD-style syntax (describe, it, expect), inspired by Mocha/Jest. #UnitTesting
It runs @OpenSCAD in headless mode via Bash and outputs simple pass/fail results. No external dependencies beyond OpenSCAD + Bash.
It’s very lightweight, but it can help catch issues early in function-heavy code. Maybe it’s useful to others too. Feedback welcome!
My talk at OneMoreThing 2024 on #UnitTesting #SwiftUI and #SwiftConcurrency was recorded, but has yet to be edited and uploaded.
Last night, I published an edited form of my speaker notes from that talk to my blog.
https://blog.rachelbrindle.com/2025/06/12/testing-swiftui-and-swiftconcurrency/
When I first started writing tests, I felt unproductive for a long time. I would try to write each test as fast as possible so I could move on to the "real" code.
Then one day, when a production deployment failed for lack of a simple test, I realized the critical value of tests... https://www.darrenmcleod.com/2025/06/test-code.html
Sometimes you gotta sherlock yourself. I think that Swift Testing should have something akin to Nimble's Polling Expectations.
https://forums.swift.org/t/pitch-polling-expectations/79866
Also, wow, macros are hard. I am so glad I don't have to write them on a day-to-day basis.
falsify
A few days ago, Edsko de Vries of Well-Typed published an in-depth article on property-based software testing, with a focus on the concept of “shrinking.”
In brief, property-based testing is sort of like fuzz testing, but for algorithms and protocols. Like fuzz testing, it procedurally generates random test cases; unlike fuzz testing, those cases are carefully designed to verify whether a software implementation of an algorithm satisfies a specific property of that algorithm, such as:
“the number of iterations for an input dataset of size n grows no faster than n*log(n)”
“the sequence of log messages is guaranteed to obey the rules of this particular finite-state automaton: (connect | fail) -> (send X | fail) -> (receive Y | receive Z | fail) -> success.”
Shrinking is the process of simplifying a failed test case. If you have found some input that makes your function return a value when it should have thrown an exception, or produce a result that does not satisfy some predicate, then that input is a “counterexample” to your assertion about the properties of that function. And you may want to be able to “shrink” that counterexample input to see if you can cause the function to behave incorrectly again but with a simpler input. The “QuickCheck” library provides a variety of useful tools to let you define property tests with shrinking.
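As a rough illustration (in Python with Hypothesis, a QuickCheck-style library, rather than in Haskell), this is what a property test with automatic shrinking looks like; broken_sort is a hypothetical, deliberately buggy function under test:
```python
# broken_sort is a hypothetical, deliberately buggy implementation under test.
from hypothesis import given, strategies as st


def broken_sort(xs):
    # Bug: round-tripping through a set silently drops duplicate elements.
    return sorted(set(xs))


@given(st.lists(st.integers()))
def test_sort_preserves_length(xs):
    # Property: sorting must not change the number of elements.
    assert len(broken_sort(xs)) == len(xs)
```
When the property fails, Hypothesis shrinks whatever random list triggered the failure down to a minimal counterexample, in this case something like a two-element list with a repeated value.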
Defining unit tests with such incredible rigor takes quite a lot of time and effort, so you probably would not want to use property-based testing for your ordinary, everyday software engineering. If you are, for example, being scrutinized by the US Department of Government Efficiency, you would likely be fired for taking so much time to write such high-quality software with such a strong guarantee of correctness.
But if you are, for example, designing a communication protocol that will be used in critical infrastructure for the next 10 or 20 years and you want to make sure the reference implementation of your protocol is without contradictions, or if you are implementing an algorithm where the mathematical properties of the algorithm fall within some proven parameters (e.g. computational complexity), property-based testing can give you a much higher degree of confidence in the correctness of your algorithm or protocol specification.
Neat! #Duende is sponsoring the #dotnet project Shouldly for the next year with $3,000. #unittesting #xunit #nunit #mstest
https://blog.duendesoftware.com/posts/20250415-shouldly-assertion-framework/
The testing attachments proposal has been accepted (with modifications)! To be attached to some upcoming Swift version! https://forums.swift.org/t/accepted-with-modifications-st-0009-attachments/79193
No real negative feedback, either. I guess people quickly became attached to the idea.
There were some disagreements about naming in the review, but I’m glad that we resolved that amicably. I’d really hate it if we were too attached to an idea to come to an agreement.
Adding Attachments to Swift Testing is up for review! It's my first time being a review manager.
Forum thread: https://forums.swift.org/t/st-0009-attachments/78698
Proposal: https://github.com/swiftlang/swift-evolution/blob/main/proposals/testing/0009-attachments.md