
Merge branch 'master' of git.juliobiason.net:/home/slowsloth/git.juliobiason.net/git/blog

Branch: master
Julio Biason, 6 years ago (commit 4255ea9f5f)
20 changed files (changed-line counts in parentheses):

 1. config.toml (10)
 2. content/books/_index.md (5)
 3. content/books/things-i-learnt/_index.md (22)
 4. content/books/things-i-learnt/disclaimer/index.md (42)
 5. content/books/things-i-learnt/document-is-contract/index.md (37)
 6. content/books/things-i-learnt/document-it/index.md (28)
 7. content/books/things-i-learnt/future-trashing/index.md (26)
 8. content/books/things-i-learnt/gherkin/index.md (54)
 9. content/books/things-i-learnt/integration-tests/index.md (69)
10. content/books/things-i-learnt/intro/index.md (52)
11. content/books/things-i-learnt/languages-tests/index.md (25)
12. content/books/things-i-learnt/spec-first/index.md (40)
13. content/books/things-i-learnt/steps-as-comments/index.md (58)
14. content/books/things-i-learnt/tests-apis/index.md (45)
15. content/books/things-i-learnt/tests-dead-code/index.md (59)
16. content/books/things-i-learnt/tests-in-the-command-line/index.md (37)
17. content/books/things-i-learnt/throw-away/index.md (38)
18. content/thoughts/things-i-learnt-the-hard-way-the-book.md (42)
19. content/thoughts/things-i-learnt-the-hard-way.md (1030)
20. themes/nighttime (2)

config.toml (10)

@@ -16,6 +16,8 @@ taxonomies = [
{name = "tags", rss = true},
]
generate_rss = true
# Whether to do syntax highlighting
# Theme can be customised by setting the `highlight_theme` variable to a theme supported by Zola
highlight_code = true
@@ -32,11 +34,3 @@ after_dark_menu = [
{url = "$BASE_URL/books", name = "Books"},
]
after_dark_title = "JulioBiason.Net 4.0"
hyde_links = [
{name = "Category: Book Reviews", url = "/reviews/books"},
{name = "Category: Code", url = "/code"},
{name = "Category: Reviews", url = "/reviews"},
{name = "Tags", url = "/tags"},
]
hyde_reverse = true

content/books/_index.md (5)

@@ -1,8 +1,13 @@
+++
title = "My Books"
template = "section-contentless.html"
transparent = true
+++
## Portuguese/Português
* [Uma Lição de Vim](uma-licao-de-vim)
## English/Inglês
* [Things I Learnt The Hard Way](things-i-learnt)

content/books/things-i-learnt/_index.md (22)

@@ -0,0 +1,22 @@
+++
transparent = true
title = "Things I Learnt The Hard Way (In 30 Years of Software Development)"
template = "section-contentless.html"
+++
* [Intro](intro)
* [Disclaimer](disclaimer)
* Programming:
* [Spec First, Then Code](spec-first)
* [Write Steps as Comments](steps-as-comments)
* [Gherkin Is Your Friend to Understand Expectations](gherkin)
* [Unit Tests Are Good, Integration Tests Are Gooder](integration-tests)
* [Testing Every Function Creates Dead Code](tests-dead-code)
* [Tests Make Better APIs](tests-apis)
* [Make Tests That You Know How To Run on the Command line](tests-in-the-command-line)
* [Be Ready To Throw Your Code Away](throw-away)
* [Good Languages Come With Tests](languages-tests)
* [Future Thinking Is Future Trashing](future-trashing)
* [Documentation Is a Love Letter To Your Future Self](document-it)
* [The Function Documentation Is Its Contract](document-is-contract)

content/books/things-i-learnt/disclaimer/index.md (42)

@@ -0,0 +1,42 @@
+++
title = "Things I Learnt The Hard Way - Disclaimer"
date = 2019-06-19
[taxonomies]
tags = ["en-au", "books", "things i learnt", "disclaimer"]
+++
There is one magical thing you need to know when reading this book: it's all
personal opinion.
<!-- more -->
A lot of stuff I'm going to discuss throughout this book will come directly
from my personal experience in several projects -- system applications, web
backend, embedded, mobile, stream processing -- in several different languages
-- C, C++, Python, Java. And, because it comes from personal experience,
everything reflects my own personal opinion on several subjects.
Obviously, you don't need to agree with every single point.
Also, sometimes I may mention examples that people who know me -- who worked
with me, heard me complain about some project, inherited one of my projects,
or had me inherit one of _their_ projects -- may recognise and think I'm
attacking the author.
I am not.
We all make mistakes. Sometimes we don't know the topic we are attacking,
sometimes we don't have the full specs, sometimes we don't have the time to
write things properly during crunch time. And that's why some things don't
look as pretty as they should. Heck, if you think I'm attacking the original
author of some example, look back at the stuff I wrote and you'll see things a
lot worse.
But I need the example. I want to show people how things can be better. I want
to show people how my opinion on a subject was built. And, again, I'm in no
way attacking the original author of the code. I may even call the code
"stupid", but I'm not calling the author _stupid_.
With that in mind...
{{ chapters(prev_chapter_link="/books/things-i-learnt/intro", prev_chapter_title = "Intro", next_chapter_link="/books/things-i-learnt/spec-first", next_chapter_title="Spec First, The Code") }}

content/books/things-i-learnt/document-is-contract/index.md (37)

@@ -0,0 +1,37 @@
+++
title = "Things I Learnt The Hard Way - The Function Documentation Is Its Contract"
date = 2019-06-21
[taxonomies]
tags = ["en-au", "books", "things i learnt", "documentation", "contracts"]
+++
When you start the code by [writing the
documentation](/books/things-i-learnt/steps-as-comments), you're actually
making a contract (probably with your future self): I'm saying this function
does _this_ and _this_ is what it does.
<!-- more -->
Remember that the documentation must be a clear explanation of what your code
_is_ doing; remember that good documentation should make [reading the code
only by the function documentation](/books/things-i-learnt/document-it)
clear.
A function called `mult`, documented as "Get the value and multiply by 2",
but which, when you look at the code, does multiply by 2 but also sends the
result through the network -- or even just asks a remote service to multiply
the incoming value by 2 -- is clearly breaking its contract. It's not just
multiplying by 2; it's doing more than that, or it's asking someone else to
manipulate the value.
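To make that concrete, here is a minimal sketch in Python (the
`send_to_metrics_server` helper is hypothetical, standing in for the network
work the docstring never mentions):

```
def send_to_metrics_server(result):
    # stand-in for a real network call; hypothetical helper
    print("POST /metrics", result)

def mult(value):
    """Get the value and multiply by 2."""
    result = value * 2
    send_to_metrics_server(result)  # surprise: the contract says nothing about this
    return result
```

A caller reading only the docstring has no way of knowing a network hop is
involved.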
So, what should you do when this happens?
The easy solution is to change the documentation. But do you know if people
who called the function expecting it to be "multiply value by 2" will be happy
for it to call an external service? There is a clear breach of "contract" --
whatever you initially said your function would do -- so the correct solution
would be to add a new function with a proper contract -- and probably a better
name.
{{ chapters(prev_chapter_link="/books/things-i-learnt/document-it", prev_chapter_title="Documentation Is a Love Letter To Your Future Self") }}

content/books/things-i-learnt/document-it/index.md (28)

@@ -0,0 +1,28 @@
+++
title = "Things I Learnt The Hard Way - Documentation Is a Love Letter To Your Future Self"
date = 2019-06-21
[taxonomies]
tags = ["en-au", "books", "things i learnt", "documentation"]
+++
We all know writing the damn docs for functions and classes and modules is a
pain in the backside. But being able to recall what you were thinking when you
wrote the function will save your butt in the future.
<!-- more -->
When I say that it will save your butt, I don't mean the documentation will
tell you something like "Here are the lotto numbers in 2027"[^1] or "If John
complains about your future code review, here is some shit he did in the
past".
I mean, it will explain what the _flow_ of your code is expected to do.
Imagine this: take your code and replace every function call with its
documentation. Can you understand what it is expected to do just by reading
that? If you can, congratulations, you won't have a problem in the future; if
you can't... well, I have some bad news for you...
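Here is that exercise as a tiny Python sketch (all names hypothetical):
replace each call with its documentation and check whether the flow still
reads:

```
def fetch_new_posts():
    """Retrieve posts newer than the last seen ID."""
    return []

def store(posts):
    """Persist the given posts in the database."""

def sync():
    posts = fetch_new_posts()  # reads as: "Retrieve posts newer than the last seen ID"
    store(posts)               # reads as: "Persist the given posts in the database"

sync()
```

If the flow still makes sense read aloud like that, the documentation is
doing its job.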
[^1]: Please, don't make me revise this in 2027... :(
{{ chapters(prev_chapter_link="/books/things-i-learnt/future-trashing", prev_chapter_title="Future Thinking is Future Trashing", next_chapter_link="/books/things-i-learnt/document-is-contract", next_chapter_title="The Function Documentation Is Its Contract") }}

content/books/things-i-learnt/future-trashing/index.md (26)

@@ -0,0 +1,26 @@
+++
title = "Things I Learnt The Hard Way - Future Thinking is Future Trashing"
date = 2019-06-21
[taxonomies]
tags = ["en-au", "books", "things i learnt", "design", "solution"]
+++
When developers try to solve a problem, they sometimes try to find a way that
will solve all the problems, including the ones that may appear in the future.
<!-- more -->
Trying to solve the problems that will appear in the future comes with a hefty
tax: those future problems will never come -- and, believe me, they _never_
come -- and you'll end up either maintaining a huge behemoth of code that will
never be fully used, or rewriting the whole thing 'cause there is a shitton of
unused stuff.
Solve the problem you have right now. Then solve the next one. And the next
one. At one point, you'll realize there is a pattern emerging from those
solutions and _then_ you'll find your "solve everything". This pattern is the
_abstraction_ you're looking for and _then_ you'll be able to solve it in a
simple way.
{{ chapters(prev_chapter_link="/books/things-i-learnt/languages-tests", prev_chapter_title="Good Languages Come With Tests", next_chapter_link="/books/things-i-learnt/document-id", next_chapter_title="Documentation Is a Love Letter To Your Future Self") }}

content/books/things-i-learnt/gherkin/index.md (54)

@@ -0,0 +1,54 @@
+++
title = "Things I Learnt The Hard Way - Gherkin Is Your Friend to Understand Expectations"
date = 2019-06-19
[taxonomies]
tags = ["en-au", "book", "things i learnt", "gherkin", "expectations"]
+++
Gherkin is a file format for writing behaviour tests. But it can also give you
some insight into what you should do.
<!-- more -->
Alright, let's talk a bit about Gherkin:
[Gherkin](https://en.wikipedia.org/wiki/Cucumber_(software)#Gherkin_language)
is a file format created for [Cucumber](https://en.wikipedia.org/wiki/Cucumber_(software)),
which describes scenarios, what's in them, what actions the user/system will
do and what's expected after those actions, at a very high level, so people
without programming experience can describe what's expected from the system.
Although Gherkin was born with Cucumber, it is now supported by a bunch of
programming languages, through external libraries.
A typical Gherkin file may look something like this:
* **Given that** _initial system environment_
* **When** _action performed by the user or some external system_
* **Then** _expected system environment_
Or, in a more concrete example:
* **Given that** The system is retrieving all tweets favourited by the user
* **When** It finds a tweet with an attachment
* **Then** The attachment should be saved along the tweet text
Pretty simple, right?
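For reference, that same scenario written in an actual Gherkin file might look
something like this (a sketch; the feature and scenario names are made up):

```
Feature: Favourite tweets with attachments

  Scenario: Save attachments along with the tweet
    Given the system is retrieving all tweets favourited by the user
    When it finds a tweet with an attachment
    Then the attachment should be saved along with the tweet text
```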
Now, why am I mentioning this?
Sometimes, specs are not the clearest source of information about what is
expected from the system. If you're confused about what you should write,
asking the person responsible for the request to write something like Gherkin
may give you better insight into it.
Obviously, it won't be complete. People tend to forget the error situations --
people entering just numbers in name fields, letters in age fields, tweets
with no text and just attachments -- but at least with a Gherkin description
of the system, you get a better picture of the whole.
Also, you may not like to write specs. That's alright, you can replace them
with Gherkin anyway.
{{ chapters(prev_chapter_link="/books/things-i-learnt/steps-as-comments", prev_chapter_title="Write Steps as Comments", next_chapter_link="/books/things-i-learnt/integration-tests", next_chapter_title="Unit Tests Are Good, Integration Tests Are Gooder") }}

content/books/things-i-learnt/integration-tests/index.md (69)

@@ -0,0 +1,69 @@
+++
title = "Things I Learnt The Hard Way - Unit Tests Are Good, Integration Tests Are Gooder"
date = 2019-06-19
[taxonomies]
tags = ["en-au", "book", "things i learnt", "unit tests", "integration tests"]
+++
The whole is greater than the sum of its parts. And that includes tests for
the whole compared to tests of the single parts.
<!-- more -->
First, I just don't want to get into a discussion about what the "unit" in
"unit test" is[^1], so let's take it that a unit test is a test that tests a
class/function, not the whole system, which would require data flowing through
several classes/functions.
There are several libraries/frameworks that actually split things in a way
that makes it hard to test the whole.
[Spring](https://spring.io/)+[Mockito](https://site.mockito.org/) is one of
those combinations -- and one that I've worked with. Due to Java's bean
container, the extensive use of beans by Spring and the way Mockito interacts
with the container, it's pretty easy to write tests that involve only one
class: you can ask Mockito to mock every dependency injected into a class,
simply by using annotations.
And this is all cool. But making sure each class does what it should doesn't
give a proper view of the whole; you can't see whether that collection of
perfectly tested classes actually solves the problem the system is responsible
for solving.
Once, in C++, I wrote an alarm system
[daemon](https://en.wikipedia.org/wiki/Daemon_(computing)) for switches. There
were three different levels of things the alarm system could do: it could just
log the message of the incoming error, it could log the error and send an
SNMP message, or it could log the error, send an SNMP message and turn on an
LED on the front panel. Because each piece had a well defined functionality,
we broke the system into three different parts: one for the log, one for the
SNMP and one for the LED. All tested, all pretty. But I still had a nagging
feeling that something was missing. That's when I wrote a test that would
bring the daemon up, send some alarms and check the results.
And, although each module was well tested, we still found one thing we were
doing wrong. If we had never written an integration test, we would never have
caught it.
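The original test was C++, but the idea, sketched in Python with hypothetical
binary names and paths, goes something like this:

```
import subprocess
import time
import unittest

class AlarmDaemonIntegrationTest(unittest.TestCase):
    def test_alarm_is_logged(self):
        # bring the daemon up (hypothetical binary and flags)
        daemon = subprocess.Popen(["./alarmd", "--log-file", "/tmp/alarmd.log"])
        try:
            time.sleep(1)  # crude wait for the daemon to come up
            # send an alarm through the same interface a switch would use
            subprocess.run(["./alarmctl", "raise", "LINK_DOWN"], check=True)
            time.sleep(1)  # give the daemon time to process it
            # check the observable result, not the internals
            with open("/tmp/alarmd.log") as log:
                self.assertIn("LINK_DOWN", log.read())
        finally:
            daemon.terminate()

if __name__ == "__main__":
    unittest.main()
```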
Not only that, but because we wrote a test that interacted with the daemon, we
got a better picture of its functionality, and the test actually _made
sense_ -- as in, if you read the unit tests, they seemed disconnected from
what the daemon was expected to do, but the integration test actually read
like "Here, let me show that we actually did what you asked". And yes, this
was akin to [Gherkin](/books/things-i-learnt/gherkin) tests, although I didn't
know Gherkin at the time.
Personally, I think integration tests become more important than unit tests
over time. The reason is that I still have the feeling that unit tests check
whether the classes/functions have _adherence_ to the underlying _design_ --
Can your view actually work without the controller? Is the controller using
something from the model, or using things that should be in the view? -- but
adherence to the design gets better over time: developers pick up the layout
from previous examples, capturing the design by osmosis, while the big picture
keeps getting more and more complex, with lots of moving parts.
[^1]: There is no "unit" in "unit tests". "Unit test" means the test _is_ a
unit, indivisible and dependent only on itself.
{{ chapters(prev_chapter_link="/books/things-i-learnt/gherkin", prev_chapter_title="Gherkin Is Your Friend to Understand Expectations", next_chapter_title="Testing Every Function Creates Dead Code", next_chapter_link="/books/things-i-learnt/tests-dead-code") }}

content/books/things-i-learnt/intro/index.md (52)

@@ -0,0 +1,52 @@
+++
title = "Things I Learnt The Hard Way - Intro"
date = 2019-06-18
[taxonomies]
tags = ["en-au", "books", "things i learnt", "intro"]
+++
"Things I Learnt The Hard Way (In 30 Years of Software Development)" started
as a simple sequence of toots (the same as "tweets", but outside Twitter) when
I was thinking about a new presentation I could do.
But why "a new presentation"?
<!-- more -->
I go around my state with a group called "Tchelinux": We usually go to
universities and talk to people starting uni, explaining things about
free/libre software and sometimes telling people about things they wouldn't
normally see in the uni curriculum.
One thing that annoys me is that there are very few presentations about "when
things go wrong". All the presentations show prototypes or tell only the good
stuff, hiding all the wrong things that could happen[^1]. Obviously, after
working 30 years in the field of software development, I've seen my fair share
of things going wrong -- sometimes unimaginable piles of crap -- and I thought
"maybe that's something people would like to hear about".
And that's when the toot sequence started. Before I noticed, I had spent the
whole day just posting this kind of stuff (fortunately, my "incoming" pile
was a bit empty at the time) and it had 30 points, plus addendums and a few
clarifications. That's when I decided to group them all in a single post.
All I thought when I grouped everything in a post was "this will make things
easier for the people following the thread on Mastodon". But then the post
appeared on Reddit. And Twitter. And HackerNews. And YCombinator. And none of
those were posted by me.
But here is the thing: Each point was limited by the toot size, which is 500
characters. Sometimes that's not enough to expand the point, explain it
properly and add some examples.
And that's how the idea to write this "book" came to life.
One thing you must keep in mind here: *these are my opinions*. I understand
that not everything is as black and white as I put it here, and some people's
experiences may not match what's written. Also, you get a bit cynical about
technology after 30 years. So... tread carefully, 'cause here be dragons.
[^1]: Yup, I'm guilty of that too.
{{ chapters(next_chapter_link="/books/things-i-learnt/disclaimer", next_chapter_title="Disclaimer") }}

content/books/things-i-learnt/languages-tests/index.md (25)

@@ -0,0 +1,25 @@
+++
title = "Things I Learnt The Hard Way - Good Languages Come With Tests"
date = 2019-06-20
[taxonomies]
tags = ["en-au", "books", "things i learnt", "programming languages", "tests"]
+++
You can be sure that, if a language brings a testing framework -- even a
minimal one -- in its standard library, the ecosystem around it will have
better tests than that of a language that doesn't carry a testing framework,
no matter how good the external testing frameworks for the language are.
<!-- more -->
The reason is kinda obvious on this one: When the language itself brings a
testing framework, it reduces the friction for people to start writing tests,
and that includes the authors of the language itself and the community.
Sure, better frameworks may come along, and languages without a testing
framework in their standard library may have options with better support and
easier access but, again, when tests are there from the start, the start is
better and the final result is better.
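Python is a handy example of this: a test needs nothing beyond the
interpreter, because `unittest` ships in the standard library:

```
import unittest

class TestAddition(unittest.TestCase):
    def test_one_plus_one(self):
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    unittest.main()
```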
{{ chapters(prev_chapter_link="/books/things-i-learnt/throw-away", prev_chapter_title="Be Ready To Throw Your Code Away", next_chapter_link="/books/things-i-learnt/future-trashing", next_chapter_title="Future Thinking is Future Trashing") }}

content/books/things-i-learnt/spec-first/index.md (40)

@@ -0,0 +1,40 @@
+++
title = "Things I Learnt The Hard Way - Spec First, Then Code"
date = 2019-06-18
[taxonomies]
tags = ["en-au", "books", "things i learnt", "specs", "code"]
+++
"Without requirements or design, programming is the art of adding bugs to an
empty text file." -- Louis Srygley
<!-- more -->
If you don't know what you're trying to solve, you don't know what to code.
A lot of times we have this feeling of "let me jump straight to the code". But
without understanding what problem you're trying to solve, you'll end up
writing a bunch of things that don't solve anything -- or, at least, not
anything that _should_ be solved.
So here is the point: try to get a small spec of whatever you want to solve.
But be aware that even that spec may have to be thrown out, as the
understanding of the problem tends to grow for as long as the project
continues.
Yes, it's paradoxical: you need a spec to know what to code and to avoid
coding the wrong solution, but the spec may be wrong, so you may _end up_
solving the wrong problem anyway. So what's the point? The point is that the
spec reflects the understanding of the problem _at a certain point in time_:
all you (and your team) know is _there_.
The times I spent the longest staring at my own code wondering what to do next
were when we didn't have the next step defined: some point of the solution was
missing, or we didn't have the communication structures defined, or something
of the sort. Usually, when that happened, I'd end up on Twitter or Mastodon
instead of trying to solve the problem. So when you see yourself doing this
kind of stuff -- "I don't know what to do next, and I'm not sure if I'm done
with the current problem" -- then maybe it's time to stop and talk to the
other people in the project to figure it out.
{{ chapters(prev_chapter_link="/books/things-i-learnt/disclaimer", prev_chapter_title="Disclaimer", next_chapter_link="/books/things-i-learnt/steps-as-comments", next_chapter_title="Write Steps as Comments") }}

content/books/things-i-learnt/steps-as-comments/index.md (58)

@@ -0,0 +1,58 @@
+++
title = "Things I Learnt The Hard Way - Write Steps as Comments"
date = 2019-06-18
[taxonomies]
tags = ["en-au", "books", "things i learnt", "steps", "comments", "code"]
+++
Don't know how to solve your problem? Write the steps as comments in your
code.
<!-- more -->
There you are, looking at the blank file wondering how you're going to solve
that problem. Here is a tip:
Take the spec you (or someone else) wrote. Break each point into a series of
steps needed to reach the expected result. You can even write them in your
native language, if you don't speak English.
Then fill the spaces between the comments with code.
For example, say you have a spec like "connect to server X and retrieve
everything there. You should save the content in the database. Remember that
server X has an API where you can pass an ID (the last ID seen), which you can
use to avoid retrieving the same content again." Pretty simple, right?
Now, write this as comments, pointing out the steps you need to take:
```
// connect to server X
// retrieve posts
// send posts to the database
```
Ah, you forgot the part about the ID. No problem: just add it in the proper
places -- for example, it doesn't make sense to connect to the server before
you have the last seen ID:
```
// open configuration file
// get value of the last seen ID; if it doesn't exist, it's empty.
// connect to server X
// retrieve posts starting at the last seen ID
// send posts to the database
// save the last seen ID in the configuration file
```
Now it is "easy"[^1]: You just add the code after each comment.
A better option is to turn the comments into functions and, instead of
writing the code between the comments, write the functionality in the
functions themselves, keeping a clean view of what your application does in
the main code.
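Here is a minimal sketch of that last option in Python -- all names are
hypothetical and the server/database parts are stubbed out -- where the main
code keeps reading like the original comments:

```
import configparser

CONFIG_FILE = "fetcher.ini"

def read_last_seen_id():
    # open configuration file; get the last seen ID (empty if it doesn't exist)
    config = configparser.ConfigParser()
    config.read(CONFIG_FILE)
    return config.get("state", "last_seen_id", fallback="")

def retrieve_posts(last_seen_id):
    # connect to server X and retrieve posts starting at the last seen ID
    # (stubbed out here; a real client would call the server's API)
    return [{"id": "42", "text": "a post"}]

def save_posts(posts):
    # send posts to the database (stubbed out)
    for post in posts:
        print("saving post", post["id"])

def save_last_seen_id(last_seen_id):
    # save the last seen ID in the configuration file
    config = configparser.ConfigParser()
    config["state"] = {"last_seen_id": last_seen_id}
    with open(CONFIG_FILE, "w") as config_file:
        config.write(config_file)

def main():
    last_seen_id = read_last_seen_id()
    posts = retrieve_posts(last_seen_id)
    save_posts(posts)
    if posts:
        save_last_seen_id(posts[-1]["id"])

if __name__ == "__main__":
    main()
```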
[^1]: Yes, that was sarcastic.
{{ chapters(prev_chapter_link="/books/things-i-learnt/spec-first", prev_chapter_title="Specs First, Then Code", next_chapter_link="/books/things-i-learnt/gherkin", next_chapter_title="Gherkin Is Your Friend to Understand Expectations") }}

content/books/things-i-learnt/tests-apis/index.md (45)

@@ -0,0 +1,45 @@
+++
title = "Things I Learnt The Hard Way - Tests Make Better APIs"
date = 2019-06-19
[taxonomies]
tags = ["en-au", "book", "things i learnt", "unit tests", "layers", "apis"]
+++
Testing things in isolation may give a better view of your APIs.
<!-- more -->
When I spoke about [integration
tests](/books/things-i-learnt/integration-tests), you may have gotten the
impression that I don't like unit tests[^1].
Actually, I think they provide some good intrinsic value.
For example, as mentioned before, they can provide a better look at the
adherence to the design.
But, at the same time, they give a better view of your internal -- and even
external -- APIs.
For example, say you're writing the tests for the view layer -- 'cause, you
know, we write everything in layers; layers on top of layers -- and you notice
that you have to keep a lot of data (state) around to be able to make the
calls to the controller. That's a sign that you may have to take a better look
at the controller API.
Not only that, but take, for example, the case where you're working on a
library -- which will be called by someone else -- and you're writing tests
for the most external layer, the one the library exposes. Again, you notice
that you have to keep a lot of context around: lots of variables, variables
coming from different places, similar calls using parameters in different
ways. Your tests will look like a mess, won't they? That's because the API
_is_ a mess.
Unit testing your layers makes you the _user_ of that layer's API, and then
you can see how much one would suffer -- or, hopefully, enjoy -- using it.
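A contrived, runnable sketch of the smell (Python, all names hypothetical):
if the test has to haul this much state around for a single call, the layer's
API is telling you something:

```
# The "messy" layer: the caller owns state the layer should manage itself.
class Controller:
    def fetch(self, conn, cache, user_id, page, page_size, include_deleted):
        return []  # stubbed out for the sketch

def test_fetch_needs_too_much_context():
    conn = {"host": "example.com", "port": 443}   # state the test must build...
    cache = {}                                    # ...and carry around
    result = Controller().fetch(conn, cache, user_id=1, page=0,
                                page_size=50, include_deleted=False)
    assert result == []

test_fetch_needs_too_much_context()
```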
[^1]: Again, let's ignore for a second that there are no "unit" in "unit
tests"...
{{ chapters(prev_chapter_link="/books/things-i-learnt/integration-tests", prev_chapter_title="Unit Tests Are Good, Integration Tests Are Gooder", next_chapter_link="/books/things-i-learnt/tests-in-the-command-line", next_chapter_title="Make Tests That You Know How To Run on the Command line") }}

content/books/things-i-learnt/tests-dead-code/index.md (59)

@@ -0,0 +1,59 @@
+++
title = "Things I Learnt The Hard Way - Testing Every Function Creates Dead Code"
date = 2019-06-21
[taxonomies]
tags = ["en-au", "books", "things i learnt", "unit tests", "dead code"]
+++
If you write a test for every single function on your system, and your system
keeps changing, how will you know when a function is not necessary anymore?
<!-- more -->
Writing a test for every single function in your system may come from the
"100% Coverage Syndrome", which afflicts some managers: the belief that the
only way to be completely sure your system is "bug free" is to write tests for
every single piece of code, until you reach the magical "100% coverage" in all
the tests.
I do believe you can reach 100% coverage, as long as you're willing to
_delete_ your code.
Cue the universal gasps here.
But how do you know which pieces of code can be deleted?
When I mentioned [integration
tests](/books/things-i-learnt/integration-tests), I mentioned how much more
sense it made to me to read them instead of the "unit" tests, because they
described exactly how the system would operate in normal conditions. If you
write tests that go through the system doing normal operations, and you get
tests for all the normal cases -- and some "abnormal" ones, like when things
go wrong -- then you know that, if you run those tests and they mark some
lines as "not tested", it's because you don't need those lines.
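With Python's coverage.py, for instance, that check is a two-command affair
(the test path is illustrative):

```
$ coverage run -m pytest tests/integration
$ coverage report -m   # the "Missing" column lists the untested lines
```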
"But Julio, you're forgetting the error control!" I do agree, specially when
you're talking with project owners or some other expert, that people will
forget to tell you what to do in case of things going wrong -- say, someone
entering a name in the age field -- but _you_ can see those and _you_ know
that you need error control so _you_ can add the error control and describe
the situation where that error control would trigger.
If, on the other hand, you write a test for every function, then when you do a
short/simple check, you'll find that the function is still being used in the
system -- by the tests, though, not by actual "value to the user" code. Sure,
you can use your IDE to go back and forth between code and test and check
whether it points to any use beyond the test, but it won't do that by itself.
There is one other weird thing about trying to write integration tests for
error controls: sometimes, you can't reach the control. It's true! I once
wrote control checks for every function but, when running the integration
tests, there was no way to produce an input at the input layer of the system
that would reach the error control in one particular function -- mostly 'cause
the other functions, which would run before the one I was trying to test,
would catch the error first. Whether that's a design problem or not -- it
probably was -- is a different discussion, but the fact is that that function
didn't need error control.
{{ chapters(prev_chapter_link="/books/things-i-learnt/integration-tests", prev_chapter_title="Unit Tests Are Good, Integration Tests Are Gooder", next_chapter_title="Tests Make Better APIs", next_chapter_link="/books/things-i-learnt/tests-apis") }}

content/books/things-i-learnt/tests-in-the-command-line/index.md (37)

@@ -0,0 +1,37 @@
+++
title = "Things I Learnt The Hard Way - Make Tests That You Know How To Run on the Command line"
date = 2019-06-19
[taxonomies]
tags = ["en-au", "book", "things i learnt", "tests", "command line"]
+++
You know that "Play" with a little something on your IDE that runs only the
tests? Do you know what it does?
<!-- more -->
A long time ago I read a story about a professor who taught his students to
code. He preferred to teach using an IDE, 'cause then "students just have to
press a button to run the tests".
I get the idea, but I hate the execution.
When we get into the professional field, we start using things like
[continuous
integration](https://en.wikipedia.org/wiki/Continuous_integration) which,
basically, is "run the tests every time something changes" (it's a bit more
than that, but that's the basic idea).
Now, let me ask you this: do you think the students of that professor would
know how to add the command that runs the tests to a continuous integration
system?
I know I'm being too picky (one could even call me "pricky" about this), but
the fact is that whatever we do today can, at some point, be automated: our
tests can be run in an automated form, our deployment can be run in an
automated form, our validation can be run in an automated form, and so on. If
you have no idea how those things "happen", you'll need the help of someone
else to actually build this kind of stuff, instead of carrying the knowledge
(well, half the knowledge; the other half is the CI tool) with you all the
time.
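The cure is cheap: know the one-line command that runs your whole suite,
'cause that's exactly what a CI system will call. A few common examples
(assuming the usual tools for each ecosystem):

```
python -m pytest   # Python
mvn test           # Java (Maven)
cargo test         # Rust
make test          # a common convention for anything else
```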
{{ chapters(prev_chapter_link="/books/things-i-learnt/tests-apis", prev_chapter_title="Tests Make Better APIs", next_chapter_link="/books/things-i-learnt/throw-away", next_chapter_title="Be Ready To Throw Your Code Away") }}

content/books/things-i-learnt/throw-away/index.md (38)

@@ -0,0 +1,38 @@
+++
title = "Things I Learnt The Hard Way - Be Ready To Throw Your Code Away"
date = 2019-06-19
[taxonomies]
tags = ["en-au", "book", "things i learnt", "code"]
+++
A lot of people, when they start with TDD, get annoyed when you say that they
may have to rewrite a lot of stuff, including things they already wrote.
<!-- more -->
TDD was _designed_ to make you throw code away: the more you learn about your
problem, the more you understand that whatever you wrote won't solve the
problem in the long run.
You shouldn't worry about this. Your code is not a wall: if you have to throw
it away, it is not wasted material. Sure, it means the time you spent writing
it is gone, but you now have a better understanding of the problem.
Not only that, but as you progress through your project, solving problems and
getting "acquainted" with the domain, you'll also notice that the
[spec](/books/things-i-learnt/spec-first) will change. This means the problem
you solved wasn't exactly the problem you _needed_ to solve; your code is
trying to solve something that isn't exactly the problem.
Also, this is really common -- the spec changing, that is, not the code being
thrown away. One thing you can be sure of is that it won't change
_everywhere_: some of the things you solved will stay the same, some will be
completely removed and some others will be added. And you'll see yourself
refactoring your code a lot, and throwing a lot of code away -- and not just
the code that solves the problem, but also the tests for that code.
... unless you focus mostly on [integration
tests](/books/things-i-learnt/integration-tests).
{{ chapters(prev_chapter_link="/books/things-i-learnt/tests-in-the-command-line", prev_chapter_title="Make Tests That You Know How To Run on the Command line", next_chapter_link="/books/things-i-learnt/language-tests", next_chapter_title="Good Languages Come With Tests") }}

content/thoughts/things-i-learnt-the-hard-way-the-book.md (42)

@@ -0,0 +1,42 @@
+++
title = "Things I Learnt The Hard Way - The... Book?"
date = 2019-06-14
[taxonomies]
tags = ["en-au", "programming", "work"]
+++
Random thought about the previous post about "Things I Learnt The Hard Way".
<!-- more -->
When I wrote the post about "Things I Learnt The Hard Way", I never thought it
would gather the traction it did.
It was posted [on Reddit](https://old.reddit.com/r/programming/comments/bzipb5/things_i_learnt_the_hard_way_in_30_years_of/),
[on Lobste.rs](https://lobste.rs/s/hf0bkk/things_i_learnt_hard_way_30_years_software),
and it is being discussed on Twitter.
None of those was posted by me (except a single tweet, which didn't gather
that much attention).
Since then, I've added a bunch of new points -- as life goes on, I keep
remembering things I forgot to mention the first time -- and, as I write this
new post, the original now contains 83 (!!!) points.
But while the short format gives a quick idea of what I meant, it doesn't
properly explain the points -- and a lot of people are (correctly) raising
that in the discussion boards. So I feel I should really expand them.
And, at the same time, I have created a few macros for
[Zola](http://getzola.org/) (the blogging engine I'm using) to "publish" some
books (if you allow me to be really loose with the meaning of "publishing").
That's why I'm currently considering expanding the points in a digital book
format, also here in this blog, using each point as a chapter.
I'll still update the original post, but expanding the points into chapters
will give me more room to put my thoughts on each of them, and I can link each
point to the longer explanation.
Sounds like a plan, doesn't it?

content/thoughts/things-i-learnt-the-hard-way.md (1030)

File diff suppressed because it is too large.

themes/nighttime (2)

@@ -1 +1 @@
-Subproject commit c9a6aff5b4b60f19a2b784a830d5c2eba81a20fb
+Subproject commit 658e800a1a1cecd1ab382e17dbab3bd6bba4e2ec