Some Handy Git Utilities

In the course of daily coding, you gradually pile up little personal hacks you aren’t sure are worth publishing and sharing. Shell scripts in particular don’t seem to be nearly as commonly packaged and shared as, say, Javascript or Python. But here are a couple of mine, for dealing with git repos, in case anyone ever finds them handy.

FWIW, I usually stash these in my ~/.profile, as described in this post.


Sometimes, you’re working on a branch, and you just want to update trunk without much fuss and without leaving your current branch:

function tu() {
  # tu = _T_runk _U_pdate: general function for updating a trunk branch while on another
  # Defaults to master branch for basic git compatibility,
  # but takes an alternate trunk branch name as its only argument
  trunk="master" && [[ $1 ]] && trunk=$1
  echo "**tu**: trunk branch is defined as: $trunk"

  # 1. cache current branch & take updates from default remote
  branch=$(git rev-parse --symbolic-full-name --abbrev-ref HEAD)
  git fetch --prune

  # 2. update trunk
  # Just in case you don't actually have a branch with that name, check first
  # so we can exit gracefully (checkout's output was never a reliable test):
  if git rev-parse --verify --quiet "$trunk" > /dev/null; then
    echo "**tu**: Switching to trunk branch, $trunk..."
    git checkout "$trunk"
    git pull

    # 3. now that trunk is updated, return to previous branch
    echo "**tu**: $trunk updated, returning to $branch..."
    git checkout "$branch"
  else
    # NB: no `set -e` here -- in a sourced function that would poison
    # the whole interactive shell; just return a failure code instead
    echo "**tu**: $trunk branch does not exist. Exiting."
    return 1
  fi
}

And, you can create shorthands for particular trunk branches you might be using:

function mu() {
  # Branch-Specific Shorthand form of tu() for MASTER
  # DEPENDS ON tu() !
  tu master
}

(Which, in 2022, I probably oughta’ update to a ‘main’ branch version. But… I’m playing catch-up here on sharing some old code.)
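For what it’s worth, the same pattern gives you a main shorthand; the name `mau` here is just my own invented continuation of the scheme (since `mu` is already taken by master):

```shell
function mau() {
  # Branch-Specific Shorthand form of tu() for MAIN
  # DEPENDS ON tu() !
  tu main
}
```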

More commonly, and still useful, is the case where I’m mostly wanting to update my develop branch:

function du() {
  # Branch-Specific Shorthand form of tu() for DEVELOP
  # DEPENDS ON tu() !
  tu develop
}

OK, that’s maybe handy. Occasionally. But not something I obsess about. Why do I care? Ohhhh, right… REBASING.

I often want to do a quick rebase of my current branch onto the latest version of trunk. So, let’s put those building-blocks all together into a new more useful script which lets me do just that:

function turt() {
  # turt == Trunk Update, Rebase against Trunk
  # takes a single parameter: the name of your trunk branch, which defaults to master
  # DEPENDS ON tu() !
  localTrunk="master" && [[ $1 ]] && localTrunk=$1
  # use first character of trunk name to customize output
  first=${localTrunk:0:1}

  echo "**${first}ur${first}**: First, update $localTrunk..."
  tu "$localTrunk" || return 1
  # ($branch was cached for us by tu)
  echo "**${first}ur${first}**: Now, rebase $branch onto $localTrunk..."
  git rebase "$localTrunk"
}

And then to make it more specific to particular trunk branches you might be using:

function murm() {
  # murm == Master Update, Rebase against Master
  # DEPENDS ON turt() !
  turt master
}

function durd() {
  # durd == Develop Update, Rebase against Develop
  # DEPENDS ON turt() !
  turt develop
}

Of course, as much as I think most merges of trunk into a branch ought to be rebases instead, there’s a time and a place for merging in an updated trunk as well:

function tumt() {
  # tumt == Trunk Update, Merge Trunk
  # takes a single parameter: the name of your trunk branch, which defaults to master
  # DEPENDS ON tu() !
  localTrunk="master" && [[ $1 ]] && localTrunk=$1
  # use first character of trunk name to customize output
  first=${localTrunk:0:1}
  echo "**${first}um${first}**: First, update $localTrunk..."
  tu "$localTrunk" || return 1
  # ($branch was cached for us by tu)
  echo "**${first}um${first}**: Now, merge $localTrunk into $branch ..."
  git merge "$localTrunk"
}

function mumm() {
  # mumm == Master Update, Merge Master
  # DEPENDS ON tumt() !
  tumt master
}

function dumd() {
  # dumd == Develop Update, Merge Develop
  # DEPENDS ON tumt() !
  tumt develop
}

Effectively Shell-ing: Decouple your environment variables from specific shells by sourcing .profile everywhere

Since this is about the shell, I’m going to make a plug for my friend Dave Kerr’s amazing ebook: Effective Shell. If you’re going to muck about in terminals/shells, you really ought to go read it.

….

No. Really. I’ll wait. You’d be stupid not to. Go read it, or at least start reading it, then come back.

OK. Now that you’re a shell ninja, and know how some strong shell-fu can make an engineer more powerful… let’s talk variables and imports.

If you’ve ever so much as set up a dev environment and installed some non-trivial tools, you’ve likely encountered installation guidance like this (example from reactnative.dev, as of 08.22):

Note how they’re already qualifying the fact that you might have to put these exports in one file if you use bash, or in another file if you use Zsh. But they’re not done yet. They go on to clarify that if you’re ever using some shell other than the one you already configured, you’ll then still need to source those variables into your current shell:
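For concreteness, the exports in question look something like this (these mirror the React Native Android guide’s usual macOS defaults; exact paths vary by machine):

```shell
# Android tooling exports, per the typical macOS install guide
export ANDROID_HOME=$HOME/Library/Android/sdk
export PATH=$PATH:$ANDROID_HOME/emulator
export PATH=$PATH:$ANDROID_HOME/platform-tools
```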

So… sure… you could do all that. And you could wrestle with the fact that other bits of your system (such as Xcode) will periodically open up their own choice of shell (often bash), in a non-interactive way, where you can’t even exercise this option. And so everything will break. And you could accept that as inevitable. (Sucker.)

OR, you could take advantage of a simple and inexplicably overlooked option which makes this whole problem go away. In addition to the shell-specific files already mentioned above…

~/.bash_profile
~/.zprofile

…there is by default another, more universal file on POSIX-compliant systems: ~/.profile. (That’s the user-specific version for your own account. In theory, you could also tinker with /etc/profile, but as a general rule I prefer to stick to messing about with my user files rather than system-level ones.)

Sure, you could run around manually source-ing some other profile into your current shell… but why would you? Why not go with the even easier and more sustainable option of ensuring that you always have all the same environment variables in all your shells?

First, instead of copying them hither and yon… simply place all needed variables/exports into one place: ~/.profile instead of repeating them across all the bash/zsh-specific versions.

Then, open up your user’s shell-specific configuration files (such as ~/.bashrc and/or ~/.zshrc, and/or ~/.bash_profile and/or ~/.zprofile… or those for rbash, dash, tmux, etc.) and add to them all a single simple line:

[[ -s "$HOME/.profile" ]] && source "$HOME/.profile"

That’ll do it. You should now have all the same variables in all your shells, because they’re all source-ing from the same location.

And, if you’d maybe like to embellish that with some explanatory comments so you remember what’s happening here:

# SOURCE the master .profile, for concerns shared across all shells, login and non-login
[[ -s "$HOME/.profile" ]] && source "$HOME/.profile"
# Everything after this should then be specific to your interactive shells

I have also found it useful to provide an indicator so I can tell when these variables have been imported successfully into a given shell, so I include this at the end of ~/.profile:

###### LAST
echo ".profile is sourced"

Once that’s done, I find that my individual shell config files can be very lean and mean, with only shell-specific items. For instance, my ~/.zshrc can exclusively be about Zsh customizations like the amazing oh-my-zsh, which won’t do me any good over in ~/.bashrc. Meanwhile, it’s only in ~/.profile that I need to write universal workflow utility scripts and project aliases such as:

alias rns="cd ~/PROJECTS/react-native-scaffold && nvm use --lts"

Notes:

If you want to be more surgical, it may be useful to understand the difference between profile files and rc files, between interactive and non-interactive shells, and between login and non-login shells. Maybe check out this article and/or this one, or go finish reading the related topics in Effective Shell. That said… it’s complicated, especially for Mac users, which is why I’ve been pretty general about which files you might need to import the ~/.profile into. YMMV.
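If you’re curious which kind of shell you’re in at any given moment, here’s a rough sketch (the `shopt` test is bash-only; the `$-` check is POSIX):

```shell
# Interactive shells carry 'i' in the $- option flags
case $- in
  *i*) echo "interactive shell" ;;
  *)   echo "non-interactive shell" ;;
esac

# bash flags login shells with the login_shell option
if shopt -q login_shell 2>/dev/null; then
  echo "login shell"
else
  echo "non-login shell"
fi
```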

‘Self-Documenting Code’ is Not Enough: Why Code Needs Inline Comments

But don’t waste your time documenting what you did; document what you thought.


Coding is hard. And the thing you spend the most time doing isn’t typing: you spend the most time reading and trying to understand code. This applies not only to other people’s code, but also to your own, more than 10 minutes after you wrote it.

This is why documenting your code is so critical, as well as taking the time to write clean, readable code. Clear code is far better than clever code. (In fact, I might go so far as to say that clever code is usually a premature and self-justifying optimization, unless accompanied by myriad apologies and disclaimers.)

But somewhere along the way, the virtue of clarity became controversial. Some of us are sailors on the Aegean, hearing the siren song of ‘self-documenting code’ calling us to dash ourselves against the rocks of piety and self-delusion.

Today, I spoke with a wonderful mid-level engineer about some code that lacked any inline comments whatsoever. (He hadn’t written it.) He kind of agreed that was an issue… but then also disagreed, and explained why. A little while ago, a more senior engineer (herself no dummy) had told him comments were bad, and distracting… that all code should be self-documenting. Which… sounded right. Who could disagree with the idea of naming things well? Of making code readable? So, he stopped writing comments.

First, let’s just acknowledge: there are a LOT of useless comments out there. If the CSS selector is ::-webkit-scrollbar, and it has a rule that says background: transparent, I do NOT need an annotation next to that rule which combines two of those obvious keywords with an action verb to tell me it will: /* make scrollbar transparent */. I mean… I’m grateful someone’s commenting, and I’m not saying they’re a bad person who should burn in hell, but just… they should save their effort for something actually useful, eh?

Further, there’s a particular affliction of certain IDE’s and other automated tools, whereby they produce metric tons of boilerplate comments that nobody ever reads, and which in fact reduce the overall signal-to-noise ratio. This is also bad.

But don’t throw the baby out with the bath water. The problem is not the institution of inline documentation of code, but its misuse. Just as statistics themselves are not dishonest (a la: “lies, damn lies, and statistics!”), inline comments are not a bad thing… but they are sometimes abused to bad effect.

Further, writing code that is as self-documenting as possible is an unmitigated Good Thing. Variable, class and function names should indeed be rich and explanatory! Comments that re-explain what’s objectively obvious are noise without benefit. There are only two hard things in CS, etc., etc.

But not all code can self-document, and not all things can be present in code. In particular, code is only your implementation of an idea, but the most important things to document are those things which are not present:

  • The intent/purpose of code
  • Whatever’s hiding on the other side of an interface (call and return signatures, let alone behavior)
  • Any alternative implementation paths which didn’t work for some reason: these are landmines to warn your successors away from

At a minimum, inline documentation should address those things: it should explain that which your code cannot, at least not without successors needing to dive deep into reading the source of all the things on the other side of an interface. And after all, isn’t the point of documentation to save time? To avoid reinventing wheels, to avoid breaking useful abstractions?
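To make that concrete in shell terms, here’s a contrived little utility (the function and its backstory are invented for illustration) whose comments cover exactly those three things: intent, interface, and a landmine:

```shell
# INTENT: turn a branch name into a safe, URL-ish slug for build artifacts.
# ACCEPTS: one argument, the raw branch name (may contain slashes and caps).
# RETURNS: prints the slug on stdout; returns non-zero on empty input.
# LANDMINE: a single tr pass can't collapse repeated separators,
#           hence the extra sed step -- don't "simplify" it away.
function slugify() {
  [ -n "$1" ] || return 1
  echo "$1" | tr 'A-Z/' 'a-z-' | sed 's/--*/-/g'
}
```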

But there’s also another point to documentation, and it’s the same point often raised about testing: writing about your code forces you to think it through, to understand it yourself better than you would if you didn’t examine and define your assumptions. And that kind of documentation interlocks with your testing: when you write down the arguments you expect your code to accept, and the responses it should provide, you give yourself a list of targets to hit, and you remind yourself of all the ways you could blow it.

Sometimes your user story defines those targets for you… although unless you provide a link to the story, that doesn’t help anyone who comes after you. And other times, as when you write a helper utility, or some other piece of behind-the-scenes plumbing, not even JIRA can tell anyone what you were trying to do. It’s up to you to explain that, and up to you to force yourself to be clear and explicit about your intentions… so that the resulting quality of your implementation can document itself.

Moreover, in stating your intentions and assumptions, you allow your successors to determine where your code failed to live up to them. If you state your assumption (for instance, that you expect only numbers as arguments to a function), you let me help find what you overlooked (that Javascript and other dynamically-typed languages can easily coerce non-numeric values in ways you didn’t anticipate).

Last, half the point of code review is for the reviewer to test whether code is clear and understandable without requiring an exorbitant amount of context, and without requiring the reviewer to go read a bunch of source somewhere else:

Reduce WTF/m. Comment your code, not by restating what’s obvious, but by exposing what’s not.

Cleanliness is next to godliness, in a dev’s consoles

(This piece is based on my current Javascript development stack, but the principle applies almost anywhere.)

So, the whole game of coding in a complex project, in an environment that is itself likely quite complex, is conserving your attention. Focusing it like a laser on the Things That Matter, and noticing those things when they appear. But that becomes impossible when you’re surrounded by noise that drowns them out.

Consider your browser’s Javascript console, which you ideally have open all the time while you develop. Also, chances are you’ve got a terminal/TTY window open as well, running your servers/builds while you work. Consider that, too.

If there are two unusual things in there, you’re likely to notice both of them. And you can do something about them. But if there are 107, you’re likely not to. So,

PREMISE #1: CLEAN UP ALL THE CRUFT IN YOUR RUNTIME CONSOLES BEFORE THIS BECOMES A BAD NEIGHBORHOOD

`console.log()` statements pile up like the fine snow that begins a blizzard. At first, you barely notice the light layer of fuzz, and everything feels cozy and lived in. And then suddenly you can’t see your mailbox anymore and you’re wondering whether your kids and dog made it inside alive. It’s pretty much an exponential curve.

Or, call it the ‘broken windows theory of console statements’: it only takes the first couple of them ruining the place and suddenly it’s urban blight that nobody will bother to maintain, or could if they wanted to. And instead of just fixing the first few broken windows when they happened, it gets out of control, and you find yourself needing a wholesale urban restoration initiative. People lock their doors and roll up their windows when they drive by your console.  You don’t want that.

Further, there’s even some pretty heavy-duty research about the effect of messy workspaces on human productivity. One methodology that describes this is the 5S approach to manufacturing. So… 5S your consoles.

So, it’s OK if you scatter some `log()` statements around while you’re working. Heck, even if you commit them. But clean them up before you push, or otherwise make your work permanent. Even if it’s a repo of one, it’ll quickly become a dark alley that even you don’t want to travel down after dark. And if it’s a repo you share with others, it gets out of control much more quickly.

Make use of something like ESLint to warn you about them. You can run it as part of your PR sequence, or during your local build. But one way or another, you want something to nag you annoyingly to get rid of them before you push and inflict them on everybody else on your team. Or get used to them yourself. Because if nothing’s nagging you to fix them, you’ll be buried under them before you know it.
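For the record, the relevant ESLint rule is `no-console`; a fragment like this in your ESLint config (merged into whatever you already have) is enough to get that nagging:

```json
{
  "rules": {
    "no-console": "warn"
  }
}
```

Bump it to "error" if warnings aren’t annoying enough to make you act.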

PREMISE #2: WHY IS THAT ERROR HAPPENING? YOU *NEED* TO KNOW WHY THAT ERROR IS HAPPENING!!!

The problem with letting your cruft pile up is that you stop noticing things. Nothing sticks out any more. So, even a really important error that actually explains why everything just broke – or will, soon – is just another signal buried in the noise.

All that noise makes you numb. It’s sensory overload. You desperately need to cut down on the noise, so unexpected errors grab your attention when they happen.

And when they do happen, you need to jump on them. Even if you just file an Issue for now while you wrap up something else, it needs to be remarked upon and tracked down, until you can at least understand why it’s happening. And if you’re OK with the reason once you understand it, then feel free to leave it there.

For a little while. But once you have more than one of those, it counts as cruft and noise that drowns out the signals you need to see. <GOTO: PREMISE #1> And thus you need to find a way to get rid of it.

But if you don’t understand it, and if the existence of that signal doesn’t bother you… you’re a problem. When I interview people whose sample code produces random errors they don’t understand, they don’t get hired. I need you to need to understand what’s causing that error. I don’t actually care, under the right circumstances, if you fixed it. But if you didn’t care enough to even understand it, or try, or even to ask about it, you’re toast.

And if you’re working on a project with me and you do that? If we’ve both – and perhaps our colleagues, too – poured weeks or months of blood and sweat into something, and it doesn’t bother you that you created something that’s pumping out errors from our lovely code… then we have a problem. You’re hurting the project. You’re making me sad. Please, please, don’t make me sad. Figure out what’s causing the error.

Or, use this principle: Say No! Then Go! And tell someone you trust! Seriously: it’s OK if you really don’t understand an error yourself. But you gotta’ ask for help.

Because accepting bad things like that without a fight, and letting it ride indefinitely is… well, it’s a lot like having some big oozing sore on your forehead that you haven’t bothered to wonder about or treat. People won’t like you. The other kids at school will make fun. Parts of you will fall off and die. That sort of thing. Don’t recommend it.

Conclusion: It’s Not Just Aesthetics, It’s Effectiveness

To conserve your attention, you need to be able to focus on what matters. That means cutting down on useless signals that try to distract you, and make it harder for you to do your work. 5S up your consoles: you’ll be glad you did.

And then, once you have a clean console and can see what’s right in front of your eyes: don’t ignore it. Whatever is still left is there for a reason. Understand it, or you’re missing the point, and making it harder to keep a clean workspace.

JSChannel 2015 Video, plus ReactConf Video, Etc.

Anyone who missed JSChannel 2015, or who just wants to re-view it, can catch the video on YouTube! See Day 1, Day 2. (FWIW, my React/Flux Workshop begins at about 3:47:25, but avoid it if you can’t handle seeing a couple cringe-worthy one-time compilation failures I still can’t explain. 🙂)

And, since I was talking about React that day, it’s also worth checking out the video from React.js Conf 2015. Plus, there’s the React-Native talk from F8. And React-Europe.

Workshop Slides from JSChannel Conference Bangalore 2015: “React & Flux”

Thanks to the JSChannel team for having me! Slides are available below, and the associated GitHub repos are here: jschannel-react-demos & jschannel-flux-demos


Why You Should Never Again do a Scrollable List with ng-repeat

Anybody who’s ever done a long, scrollable list of items in a web interface knows that this can absolutely kill performance of the whole page, let alone the scrolling itself. People blame tools like Angular’s `ng-repeat` for performance problems, but the problem isn’t the tools: it’s just that they make it so damn easy to do things… you should not do. Performance of your page is inversely proportional to the DOM count, and a massive list of items may square or cube the number of elements in the interface.

We do all kinds of workarounds to deal with this: pagination, “infinite” scroll (which is badly misnamed, but I digress), etc. But there’s now a much easier, cleaner way to have as many items as you want, without penalty: don’t create elements for all the items in your list. Just however many will fit in the window, plus two or so. And recycle them into whichever position your user is scrolling in. (It’s Green!™) Conceptually, it’s very simple. Implementation-wise… most of us would like someone else to do the heavy lifting.

I first saw this approach mentioned in the Sencha Labs post about their “FastBook” experiment, in which they called it an ‘Infinite List’. I was surprised not to see it adopted elsewhere in JavascriptLand soon after, and am sad I never got around to trying to build one myself. (Maybe someone else did and I missed it. Please share other implementations in the comments!)

But now, the Ionic UI framework has an Angular directive for this, which they call `collection-repeat`. You. Must. Use. This. In Ionic, out of Ionic, whatever. And hack it, port it to other frameworks, tweak it, extend it, and just generally make sure that you never do long lists with naive ng-repeat… ever again.


About Using CSS Floats for Primary Layout: Stop It Already

The piece of my CSSConf Asia talk that generated the most heat in the Twittersphere was the following claim:

Floats were invented for one reason and one reason alone: to allow text to wrap around an image. They were not intended to do what we do with them. I know this hurts to hear it: but Floats are the table-based layout of our time. Unless you have to support IE6, stop using them, except in very specific, very limited situations.

So, let’s just add some quick clarity to that, as conference talks need to be more concise than is ideal: floats are cool for tactical uses, and are downright invaluable if you’re supporting old IE and can’t use display:inline-block as yet, let alone Flexbox. Even with inline-block available to you, there are still places where floats are super-handy, like kicking a single element in a toolbar off to the right, or when you really do want to wrap your inline text around an image.


My CSSConf Asia Video is Now on YouTube: “Scalable CSS”

Woops! I’ve just realized I never posted my slides, either. Here they are:


Why I’ve stopped doing single-line-define-and-assign on Angular $scope

I’ve been using Angular for all of my rapid-prototyping work for almost three years now. I’m very comfortable with the Angular idiom. But there’s one piece of it that I’ve entirely abandoned as bad hygiene and broken functionality (at least when mixed with ‘function statements’): the ‘single-line-define-and-assign’ pattern for variables and functions we want to expose via Angular’s $scope:

$scope.myVar = 12345;
$scope.myFunction = function () {/* do stuff */};

Instead, I do it like this:

var myVar = 12345;
function myFunction() {
  /* do stuff */
}

$scope.myVar = myVar;
$scope.myFunction = myFunction;

<screaming ensues> I know, I know. “Ur doin’ it wrong!” But read on, and scream after.

