Death by a thousand existential checks


You can keep your optional chaining
#typescript #programming

Existential checks are when we have to detect whether or not a variable has a value -- that is, checking to see if a variable exists. If the value is null, undefined, or otherwise falsy, then it fails the check. This usually takes the form of an if-statement.

if (thingThatExists) {
  // do something with `thingThatExists`
}

They are a natural -- and often necessary -- part of codebases. However, their overabundance can make a codebase difficult to read. When existential checks are nested within existential checks, it becomes hard to understand the context of the code we are trying to read.

In this article I will try to demonstrate that where existential checks are placed has a dramatic effect on code reuse, readability, and maintainability.

# What's the problem with existential checks?

# It increases code nesting

Just as code structure determines its function, the visual layout of code determines its maintainability. Indentation -- while necessary for visualizing the control flow of a program -- is often assumed to be merely aesthetic. However, what if indentation can help reveal unnecessary code complexity? Deep indentation tends to convolute control flow with minor details. Linus Torvalds considers more than three levels of indentation a code smell that points to a greater design flaw:

Now, some people will claim that having 8-character indentations makes the code move too far to the right, and makes it hard to read on a 80-character terminal screen. The answer to that is that if you need more than 3 levels of indentation, you're screwed anyway, and should fix your program.

Jeff Atwood points out that nested code has a high cyclomatic complexity, which is a measure of how many distinct paths there are through the code. A lower cyclomatic complexity correlates with more readable code and also indicates how easily it can be properly tested.
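As a rough illustration -- the `user` and `profile` shapes below are invented for this example, not taken from any real codebase -- compare a nested pile of existential checks with a flattened version that uses early returns:

interface Profile {
  displayName?: string;
}

interface User {
  profile?: Profile;
}

// nested existential checks: three levels of indentation
// before any real work happens
function greetNested(user?: User): string {
  if (user) {
    if (user.profile) {
      if (user.profile.displayName) {
        return `hello, ${user.profile.displayName}`;
      }
    }
  }
  return "hello, stranger";
}

// guard clauses keep the happy path flat and the
// distinct paths through the code easy to count
function greetFlat(user?: User): string {
  if (!user || !user.profile || !user.profile.displayName) {
    return "hello, stranger";
  }
  return `hello, ${user.profile.displayName}`;
}

Both functions do the same thing, but the flat version makes each path through the code obvious at a glance.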

# Deeply nested structures are a bad idea.

I started my career in software engineering by trying many different programming languages. Diving into how to write pragmatic, idiomatic code in each language taught me lessons I still carry with me.

For python, PEP 20 describes a set of design principles that every python developer should think about when architecting a codebase. There's one line in it that I think about all the time:

Flat is better than nested

This guiding principle has led me to what I believe is more maintainable, readable code. When we apply this principle to data structures, that means we should avoid deeply nested data structures.

Deeply nested structures are difficult to understand -- even when strongly typed -- and they tend to promote cases where a nested object could be empty. This has the consequence of requiring developers to make many existential checks, especially if the data is used often.
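For example, a blog entity where every level of nesting might be missing forces a check at every level before we can touch the data we actually care about. The shape below is hypothetical, purely for illustration:

interface NestedBlog {
  id: string;
  author?: {
    profile?: {
      name?: string;
    };
  };
}

// every level might be empty, so every level needs a check
function authorName(blog: NestedBlog): string {
  if (blog.author && blog.author.profile && blog.author.profile.name) {
    return blog.author.profile.name;
  }
  return "unknown author";
}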

Redux also has a great set of recommendations for organizing application state, which strongly recommend normalizing state.

There are many articles about how to normalize state, but the TL;DR is to think of an application's state like a relational database: each object type is a database table where the key is the id and the value is the object data. This makes data easier to query, update, and reuse.
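Here is a minimal sketch of what that could look like for the blog data used later in this article. The exact shape -- and the `authorId` reference -- is my own illustration, not something the Redux docs prescribe:

// each entity type gets its own "table": a map from id to entity
interface NormalizedState {
  authors: { [id: string]: { id: string; username: string; name: string } };
  blogs: { [id: string]: { id: string; body: string; authorId: string } };
}

const state: NormalizedState = {
  authors: {
    a1: { id: "a1", username: "jdoe", name: "Jane Doe" },
  },
  blogs: {
    b1: { id: "b1", body: "blog content!", authorId: "a1" },
  },
};

// querying, updating, and reusing data is a single key lookup
const blog = state.blogs["b1"];
const author = state.authors[blog.authorId];

The rest of this article keeps `author` embedded on the blog for simplicity, but the lookup pattern -- `state.blogs[id]` -- is the same.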

# It sets poor expectations for other developers and makes it harder for them to grok the codebase.

When objects can be empty, it sets terrible expectations for the end developer and raises questions like: why would this object ever be empty? When is it empty? Do I need to guard against the empty case everywhere it is used?

With a project of any size, it is important for us to set clear expectations with our code. Setting expectations leads to more readable and manageable code. When interfaces are littered with optional or nullable properties, we set terrible expectations. Therefore, we make a concerted effort to minimize the number of optional or nullable properties in our front-end codebase.

# It usually means we forgot an important step in our data pipeline.

When it comes to web development, I regularly have to build a pipeline to consume an API that is usually a separate service: a set of HTTP endpoints whose responses we have to extract, transform, and load into a front-end application. When I first started building front-end web applications, I got into a really bad habit of skipping the transform step. I would take the API response and load it directly into my application state. Skipping the transformation step made it harder to update the codebase when an API endpoint changed. APIs are not always built with their consumers in mind; they are built to be RESTful, with strict rules on how data should be formed and sent to consumers.

Another side effect of ignoring the transformation step is pushing optional properties into the view layer (e.g. react components). There's no quicker way to complicate a react component than to make a bunch of existential checks inside the render body, even with the new syntactic sugar of optional chaining.

import { useSelector } from 'react-redux';

interface Author {
  id: string;
  username: string;
  name: string;
}

interface Blog {
  id: string;
  body: string;
  author: Author | null;
}

interface Props {
  blogId: string;
}

const selectBlogs = (state: any) => state.blogs || {};
const selectBlogById = (state: any, { id }: { id: string }) =>
  selectBlogs(state)[id];

const BlogArticle = ({ blogId }: Props) => {
  const blog = useSelector(
    (state) => selectBlogById(state, { id: blogId })
  );
  if (!blog) {
    return <div>Could not find blog article</div>;
  }

  return (
    <div>
      <div>{blog.body}</div>
      written by: {blog.author?.name}
    </div>
  );
};

This example is meant to demonstrate how code becomes more complicated when there are existential checks inside our react components. We have made no guarantees about the data we send to the view layer, and as a result we have to make many existential checks and fallbacks to accommodate it.

# How do we avoid existential checks?

# Make optional properties the exception, not the rule

Instead of accepting what the backend sends us, we should instead create a consistent and reliable set of data structures that our app uses. Optional properties should be an exception, not the rule. Let's build some ideal interfaces without optional or nullable properties and then figure out how to build our state with it.

interface Author {
  id: string;
  username: string;
  name: string;
}

interface Blog {
  id: string;
  body: string;
  author: Author;
}

It's a pretty simple exercise: we go through and remove the possibility of a property not existing or being null. This is great, but how do we make this interface a reality with the data we are being provided?

# Build entity factories

The general approach to retrieving data from our redux store is to always return the object the caller is requesting. Instead of our selector potentially returning null or undefined, we return a default blog object with sane defaults for each property. This is a concept inspired by golang, where variables declared without an initial value are set to their zero value. In practice, every entity in our front-end codebase has an entity creation function that accepts a partial of that entity and does a simple merge.

const defaultAuthor = (author: Partial<Author> = {}): Author => {
  return {
    id: "",
    username: "",
    name: "",
    ...author,
  };
};

const defaultBlog = (blog: Partial<Blog> = {}): Blog => {
  return {
    id: "",
    body: "",
    ...blog,
    // always run the nested entity through its own factory
    // so `author` can never be missing
    author: defaultAuthor(blog.author),
  };
};

/*
  console.log(
    defaultBlog({ id: '123', body: 'blog content!' })
  );
  {
    id: '123',
    body: 'blog content!',
    author: {
      id: '',
      username: '',
      name: '',
    }
  }
*/

This concept of creating default entities, or fabricators, comes from ruby, where it is used primarily for specs but works for anything.

By spending a little up-front time building entity factories, we save a ton of time for every developer who needs to create a new entity. It seems tedious, but we've been able to scale this concept to even massive entities with good ROI. The manual labor pays off, and we're not using a library to do it for us: it's straightforward and easy to copy/paste.

Default entity functions help: they guarantee that every entity satisfies its interface, they give us one place to define sane defaults, and they make it trivial to stub out entities for tests.

# Transform data from HTTP requests

Going back to the fundamentals of ETL: it is imperative that we do not skip building the T in ETL. The way we do this is by creating a deserializer for each entity in our API responses.

You can see in our responses where our original interfaces came from: this is what the API is sending us. We shouldn't continue the trend of maybe having properties or maybe having an object.

interface AuthorResponse {
  id: string;
  user_name: string;
  name: string;
}

interface BlogResponse {
  id: string;
  body: string;
  author: AuthorResponse | null;
}

// always type your API responses!
interface BlogCollectionResponse {
  blogs: BlogResponse[];
}

// always create a deserializer for each entity
function deserializeBlog(blog: BlogResponse): Blog {
  return {
    id: blog.id,
    body: blog.body,
    author: deserializeAuthor(blog.author),
  };
}

function deserializeAuthor(author: AuthorResponse | null): Author {
  if (!author) {
    return defaultAuthor();
  }

  // you can see here that we change
  // the API response from `user_name` to `username`
  return {
    id: author.id,
    username: author.user_name,
    name: author.name,
  };
}

async function fetchBlogs() {
  const resp = await fetch("/blogs");
  if (!resp.ok) {
    // TODO: figure out error handling
    return;
  }
  const data: BlogCollectionResponse = await resp.json();
  const blogs = data.blogs.map(deserializeBlog);
  // TODO: save to redux
}

This seems tedious, but a developer writes it once and now we have typed API responses, a single place where incoming data is transformed, and entities that are guaranteed to match the interfaces our app expects.

This ETL structure is the basis of our front-end business logic and has scaled well to date.
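To close the loop on the `// TODO: save to redux` comment above, here is a minimal sketch of a plain reducer that stores the deserialized blogs keyed by id, so the selectors later in this article can do `state.blogs[id]`. The `ADD_BLOGS` action shape is my own assumption, not the article's actual setup:

// `Author` and `Blog` are the app interfaces defined earlier
interface Author {
  id: string;
  username: string;
  name: string;
}

interface Blog {
  id: string;
  body: string;
  author: Author;
}

interface BlogMap {
  [id: string]: Blog;
}

interface AddBlogsAction {
  type: "ADD_BLOGS";
  payload: Blog[];
}

// a plain reducer sketch; swap in whatever redux tooling you prefer
function blogsReducer(state: BlogMap = {}, action: AddBlogsAction): BlogMap {
  switch (action.type) {
    case "ADD_BLOGS": {
      const next = { ...state };
      // index each deserialized blog by id
      action.payload.forEach((blog) => {
        next[blog.id] = blog;
      });
      return next;
    }
    default:
      return state;
  }
}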

# Avoid existential checks in react components

All of the work in the previous sections should pay off now; let's see what it looks like.

import { useSelector } from "react-redux";

interface Author {
  id: string;
  username: string;
  name: string;
}

interface Blog {
  id: string;
  body: string;
  author: Author;
}

interface Props {
  blogId: string;
}

const fallbackBlog = defaultBlog();
const selectBlogs = (state: any) => state.blogs;
// here we use a fallback blog for when
// we cannot find the blog article
const selectBlogById = (state: any, { id }: { id: string }) =>
  selectBlogs(state)[id] || fallbackBlog;

const BlogArticle = ({ blogId }: Props) => {
  const blog = useSelector((state) => selectBlogById(state, { id: blogId }));
  return (
    <div>
      <div>{blog.body}</div>
      written by: {blog.author.name}
    </div>
  );
};

What did we accomplish? We removed the existential checks from the react component, the selector always returns a Blog, and the component renders safely even while the data is still being fetched.

This last point is interesting: we don't need a loader to prevent this code from throwing an error; it's safe to use while data is being fetched. We can defer implementing loading states until later.

One could argue that showing an empty blog post isn't much of an improvement. But the point isn't that we are missing a critical message to the user; it's that we can defer the decision of how to handle the blog-not-found case until later. It's hard to articulate with a trivial example the impact this change has on a codebase, but let's say we do want to add that messaging. In that case, we need at least one existential check. What we normally do is create a helper function that performs the check for us and then use it inside the react component.

const hasBlog = (blog: Blog): boolean => blog.id !== "";
if (!hasBlog(blog)) {
  return <div>Could not find blog article</div>;
}
// ...

# Conclusion

The goal of flattening our objects is not to save on lines of code; rather, it's to build a scalable, readable, and maintainable architecture that is predictable to use.

Architecting code is hard. With a little planning and pushing a few existential checks to the transform layer of ETL, we end up with a repeatable pattern for dealing with optional or nullable properties, and making our view layer easier to build. You can keep your optional chaining.

I have no idea what I'm doing. Subscribe to my rss feed to read more of my articles.