The importance of command line interfaces (CLI)
March 17, 2022
Having a basic understanding of how to work with command-line utilities is a critical computing skill - for programmers and everyday end-users alike. Many typical computer users (non-programmers) believe command-line programs are relics of the bygone MS-DOS era: unmaintained, out of vogue, or a security risk. In reality this is rarely the case; the perception stems from a lack of understanding.
In an era of productivity, where modularity (and divide-and-conquer) is critical on big projects and resources like GitHub drive meaningful, focused work on specific tools, the proliferation of CLI applications is at an all-time high. Individuals and teams are encouraged to focus on one specific domain problem, develop a tool that solves that problem very well, and provide the means for any application to use that tool (their work, under licence). This idea is nothing new; it has been known as the UNIX philosophy since 1978.
The UNIX philosophy emphasizes building simple, short, clear, modular, and extensible code that developers other than its creators can easily maintain and repurpose.
- Write programs that do one thing and do it well.
- Write programs to work together.
- Write programs to handle text streams, because that is a universal interface.
Those three bullets describe CLI software well, as the counter-examples below will show. But first, know that there are two main ways to present a program to its users:
- CLI (command-line interface)
- GUI (graphical user interface)
If we look at how GUI tools go against the UNIX philosophy principles, it becomes clearer why the UNIX philosophy is best shown via CLI.
Write programs that do one thing and do it well - GUI software can easily get lost in user experience and user interface (UX/UI) concerns, and lose focus on the problem the tool is supposed to be solving. Conversations easily regress into where to place buttons, GUI themes, or menu screens, contributing nothing toward solving the actual problem. This is a hazard in team-driven projects, where those who cannot contribute to algorithmic solutions draw attention to the GUI instead, distracting from the end goals. In the web-application space this has become such a notable issue that it split development into two separate career paths: front-end developers, who work on the UX/UI mentioned here (the view); and back-end developers, who focus on the algorithmic/database-driven solutions (the model).
Write programs to work together - GUI software is almost impossible to make interoperate with other software. If you are lucky, a GUI application will let you export CSV/tab-delimited files for import into other tools; but that still requires human effort, and is pretty much the limit of what can be done. Using the result of one GUI tool in another GUI tool requires a human clicking, navigating, and observing. This is why robotic process automation has become a $2B industry, which relies on COM/DLL hooks or mouse emulation to overcome this issue. Read more here.
Write programs to handle text streams, because that is a universal interface - GUI software limits this in two ways: by using proprietary binary formats (formats that only the GUI understands and that are not plain text); or by encapsulating the underlying data such that it cannot be exported for external manipulation and re-imported back into the GUI.
There is an excellent article on the importance of plain text files you can read here; but to summarize the criticality of plain text:
- Portable - Every device made, even those long gone, support plain text.
- Non-commercial - It will outlive every company, and it will outlive you; at zero cost.
- Offline - No cloud storage, minimal access overhead.
- Convertible - Can be converted to any other format: Word, HTML, LaTeX, Markdown. It is much more difficult to go in the other direction.
- Trackable - Differences can be tracked; Git (source control) can be used to trace entire history/evolution of edits. “Report Final Final draft v3 (1).docx” is not acceptable.
GUIs are fancy front-ends for CLI back-ends
You probably use CLI programs, and just don’t know it.
Ever looked in the install directory of your favourite software? Notice there is almost never just a single executable in the folder? If you sniff around, a large software tool may contain nearly a hundred executables. There may also be many .dll (library) files.
This is because a lot of what you do in a GUI is passed to a CLI program and read back by the GUI, and displayed to the user. The UNIX philosophy, at work.
It also makes software very easy to update and patch: only the specific affected executables or libraries need to be replaced, rather than downloading the entire application anew.
How to use CLI programs
These are the basics of using CLI programs in a Windows environment. Linux and macOS are not covered, as few corporate settings outside of tech companies use them daily - and those that do will certainly have this prerequisite knowledge.
The Shell (command prompt)
A “shell” is another name for the command prompt. Think of it like the shell of a snail: it embodies (and protects) all the internal functionality of the operating system (the organism) from the external environment. Working via the shell is a way of working with those internals. You may also hear the shell referred to as a terminal.
Windows has two built-in mechanisms for the shell.
- Command prompt (cmd)
- PowerShell (ps)
PowerShell will not be discussed here. It is a little more targeted to system administration and scripting; and has a few more layers of complexity.
Opening command prompt (cmd)
The easiest way to open a command prompt and interact with a CLI application:
- Navigate in Windows Explorer to where your CLI application is located.
- Type into Windows Explorer’s path bar, the letters “cmd” and hit “ENTER”.
- This will open a prompt directly in the folder you were at.
- Type the name of your exe (you can omit the .exe extension).
There are other ways of opening a prompt, but I find this to be a succinct approach.
Command line flags (specified with - or --)
These are used to pass options to a CLI program. They could be something like -f for specifying a file to read, or --help to see a list of all flags (options). They vary based on how the programmer coded the tool; there is no universal standard. Even internally, within the source code itself, there are multitudes of ways a program can read in flags, and all sorts of different programming approaches.
Generally, if you’re unsure, --help or -h will display a list of options. Some flags may be mandatory, some optional. Sometimes this is specified in the help with an [optional] or [mandatory] indicator, but again, there is no standard. Sometimes the order of the flags matters as well; it all depends on how the programmer envisioned the parameters being used.
application.exe --flagstyle1 -flagstyle
Generally, arguments containing whitespace (blanks) need to be wrapped in quotes (also called ‘quoting’); if not wrapped in quotes, the whitespace must be escaped with a backslash.
application.exe --input-file "C:/Path with spaces/input.txt"
In the example above, unless quoted, the program may interpret C:/Path, with, and spaces/input.txt each as separate input parameters to the program due to the spaces. The program is not aware you are passing it a file path - it has no way of knowing that - so by wrapping the path in quotes you are telling the program it should be read in whole, as one contiguous string.
Alternatively, using escapes:
application.exe --input-file C:/Path\ with\ spaces/input.txt
Notice the backslash before the spaces in the above. This is sometimes an acceptable alternative to using quotes; but a little harder to read, and clunky to type.
Windows has a nuance too, in that paths with either forward slashes or backslashes are in most cases acceptable: “C:\directory” vs “C:/directory”. However, as you’ve already seen, \ is a special character used for escaping, and can be misinterpreted in some use cases. For this reason, it’s best to use / (forward slash) in Windows paths whenever possible, instead of \ (backslash).
Typically there are three ways output of a CLI program will be relayed to the user:
- Directly to the shell (cmd prompt).
- Directly to an internally-specified filename.
- Directly to a user-specified filename you passed with a flag (such as -output).
The most common, and the one that most closely follows the UNIX philosophy, is the first: directly to the shell. This gives the most flexibility to use the output of one program directly as input to another program (called a pipe), in lockstep and with little overhead. It also enables what is known as file redirection. Both are discussed below, but first it’s important to know a few key terms:
- Standard input (stdin) : Input read from the command-line.
- Standard output (stdout) : Output displayed (printed) to the shell.
- Standard error (stderr) : Output displayed (printed) to the shell; but can be specially handled (such as redirection to an error log-file).
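As a small sketch of how these two output streams can be handled separately (using POSIX shell syntax; the filenames here are just examples):

```shell
# Run a tiny script that writes one line to each stream, then split them:
# '>' redirects stdout (stream 1), '2>' redirects stderr (stream 2).
sh -c 'echo "result line"; echo "error line" >&2' > results.txt 2> errors.log

cat results.txt   # contains only the stdout line
cat errors.log    # contains only the stderr line
```

This is exactly the "redirection to an error log-file" case mentioned above: normal results and error messages end up in different places, even though both were "printed" by the program.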
Pipes follow the syntax of:
command1 | command2 | command3 | …
They connect the standard output (stdout) of one program to the standard input (stdin) of the next program. So in the example above, command1’s standard output is used as command2’s standard input, and so forth.
Pipes really emphasize the UNIX philosophy in doing one job, and doing it well. For instance,
cat sample.txt | grep -v a | sort -r
This uses the tool cat to print the contents of sample.txt to stdout, which is piped directly into grep (a tool that filters lines by matching string patterns; here -v inverts the match, keeping lines that do not contain “a”), whose output is in turn passed to the tool sort, which sorts the results in reverse order (-r). Three different tools, each with highly specific functionality, to do one job and do it well.
It is worth noting a couple other benefits in this particular example:
- Each step does not require writing to a file, and reading it back. It is all handled in memory for faster computation.
- The successor program does not need to wait for the predecessor program to finish. The data flows like oil in a pipe.
- Only the final data transformation in the chain is output. Data extraction is via cat & grep, data transformation is via sort.
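To make the pipeline concrete, here is a sketch you can run yourself (the contents of sample.txt are invented for illustration):

```shell
# Create a small sample file with three lines.
printf 'apple\nbanana\ncherry\n' > sample.txt

# Keep only the lines NOT containing "a" (grep -v inverts the match),
# then sort the survivors in reverse order.
cat sample.txt | grep -v a | sort -r
# → cherry   ("apple" and "banana" both contain the letter a)
```

No intermediate files are created at any step; the data flows straight from cat, through grep, into sort.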
We saw above how a pipe could be used for extracting and transforming data. But what about loading data? This is notable for those familiar with the buzzword ETL - data extraction, transformation, and loading.
The most basic example of loading data would be writing to a file. Naturally there is a multitude of other ways to load data; in a database, in a BI tool, or into a GUI (to name a few). But at a fundamental level - writing data to a file is indeed a method of loading data.
In our example above, if we wanted to save the output of the final sort operation, it can be followed by a forward arrow (>) and a filename. This is called file redirection.
cat sample.txt | grep -v a | sort -r > mysortedfile.txt
In the above, the final results would be stored in mysortedfile.txt. If the file already exists, it will be replaced entirely with the latest result set.
If you don’t want to replace the existing file, you can instead append to it with a double forward arrow, >>.
cat sample.txt | grep -v a | sort -r >> mysortedfile.txt
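A minimal sketch of the difference between the two arrows (log.txt is just an example filename):

```shell
printf 'first run\n'  >  log.txt   # '>' creates or overwrites the file
printf 'second run\n' >> log.txt   # '>>' appends to the existing file

cat log.txt
# → first run
#   second run
```

If the second line had used > instead of >>, log.txt would contain only "second run".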
You might be thinking: how would you tell one result set from the next? You could of course use another tool, such as date (which gets the current date) with echo (which relays text), to write a timestamp between each iteration, as below.
echo $(date) >> mysortedfile.txt
cat sample.txt | grep -v a | sort -r >> mysortedfile.txt
There is a lot you can do with file redirection, beyond the scope of this document. For instance, using something called a tee, a user can redirect data to many programs or files simultaneously. A relatively complete list can be found here.
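As a small sketch of tee in action (filenames invented for illustration): tee copies its stdin to a file while simultaneously passing it along the pipe.

```shell
# Sort three numbers; tee saves the full sorted list to sorted.txt
# while also forwarding it to head, which shows only the first line.
printf '3\n1\n2\n' | sort | tee sorted.txt | head -n 1
# → 1
```

After running this, sorted.txt holds the complete sorted output (1, 2, 3) even though only the first line was displayed - one result set, delivered to two destinations at once.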
The benefits of CLI applications are enormous with file redirection and pipes in mind. I personally believe the most value is generated in automation - referred to as RPA (robotic process automation) in corporate speak. In Windows, CLI applications can be scripted to run as part of batch (.bat) scripts. So while figuring out command line flags may initially appear intimidating or overwhelming, it only needs to be done once and saved in a batch file for future invocation.
In its simplest form, making a batch script is no harder than putting each command on an individual line in a text file and saving it with a .bat extension instead of .txt. This tells Windows to execute each line in the file as a command when it is double-clicked in Windows Explorer. Scripting is essentially a language of its own, but for simple CLI execution this is all that’s required.
It’s important to remember that the learning curve here may be relatively steep at first, but the benefits compound. Remember that the UNIX philosophy focuses on modularity above all else; so while you may not be able to see the whole path through the forest immediately, being able to see the next step is all that’s required - discover each incremental step thereafter, until you have solved your use case.
AT&T Tech Channel has an excellent video demonstrating the UNIX philosophy below, with Dennis Ritchie and Ken Thompson - two computing science legends.