Exclusive Interview with Bjarne Stroustrup, Inventor of the C++ programming language
A question I ask all programming language inventors is what motivated them to create a new programming language. What problem were you trying to solve when you invented C++?
Stroustrup: I wanted to write a distributed system based on Unix. For that, I needed two things from a programming language:
- low-level facilities to deal efficiently with hardware and to write fundamental system code (such as device drivers, memory managers, process schedulers)
- high-level facilities for expressing abstractions such as modules and the ways they communicate over a communications infrastructure.
No language at the time could do both well, so I grafted Simula’s class concept onto C, getting “C with Classes.” C provided the low-level facilities, and Simula’s class concept provided the basis for the abstraction mechanisms. Constructors and destructors were the key novelties. My decision to treat built-in types and user-defined types (classes) as similarly as possible was important.
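The idea of constructors and destructors, and of user-defined types that behave much like built-in ones, can be sketched in modern C++. This minimal `Vector` is an illustration only, not the standard `std::vector`:

```cpp
#include <cstddef>

// A user-defined type that behaves much like a built-in one: it
// acquires its resources in the constructor and releases them in the
// destructor, so it can be used as a local variable with no manual
// cleanup at the call site.
class Vector {
public:
    explicit Vector(std::size_t n) : elem(new double[n]), sz(n) {}  // constructor: acquire
    ~Vector() { delete[] elem; }                                    // destructor: release
    Vector(const Vector&) = delete;             // keep the sketch simple: no copying
    Vector& operator=(const Vector&) = delete;
    double& operator[](std::size_t i) { return elem[i]; }
    std::size_t size() const { return sz; }
private:
    double* elem;
    std::size_t sz;
};

double sum(Vector& v) {
    double s = 0;
    for (std::size_t i = 0; i < v.size(); ++i) s += v[i];
    return s;
}
```

A `Vector` can then be created, indexed, and summed just like a built-in array, with cleanup handled automatically when it goes out of scope.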
Did you believe, when you started off, that it would have the impact it did in the software industry? What effect did this impact have on you as a computer scientist?
Stroustrup: No, I was just trying to solve a problem. Importantly, I decided very early on that I couldn’t predict exactly which problems I would have to solve, so I designed my language to address a large class of problems and assumed that the language would have to evolve based on feedback from use. It turned out that many people had problems for which C++ was a handy tool. Their problems were in the class of problems addressed and inspired improvements.
As a result, I became a tool builder with an interest in reliability, performance, and maintainability. I also had to become a teacher, a speaker, and a technical writer. For example, see my publication list (https://www.stroustrup.com/papers.html) or my video presentations (https://www.stroustrup.com/videos.html).
In what ways was C++ used in ways you had originally not envisioned? How did/do you feel about this?
Stroustrup: Initially, I had seen generic programming as a sub-category of data abstraction and conjectured that it could be done with macros. Well, macros don’t scale, so I had to invent templates for more general abstraction. Only with concepts in C++20 is that job reasonably complete. The way Alex Stepanov designed and implemented the STL (the standard-library containers and algorithms using iterators) came as a pleasant surprise and didn’t look anything like what I and most others had expected. On the other hand, I am regularly disappointed to see people describe and use C++ as if it were some other language. For example, writing code as they would in C or Java, not using C++’s strengths because the key facilities don’t fit their preconceived ideas (e.g., templates, classes that are not part of hierarchies, and exceptions).
Side note, as many of our readers are Erlang and Elixir developers: I know Joe Armstrong, Erlang co-inventor, was really keen to meet you and understand the rationale behind C++. He kept asking friends of his who organized conferences to invite you to be interviewed, and many of us are sad it never happened. (He did similar interviews with Alan Kay, Larry Wall, and Guido van Rossum.)
Stroustrup: I met Joe Armstrong once at a conference, but there was no formal interview, just a nice chat after his talk.
What I find interesting is how .NET, Java, and Erlang have all evolved into ecosystems of languages focused around a common VM, tooling, and libraries. The same did not happen for C++. Any thoughts on why?
Stroustrup: C++ was deliberately designed to be just a language among many languages – not a linker, a file system, a distributed operating system, an email system, or whatever. I think that was the right decision at the time, but it has led to weaknesses in areas like build systems, package managers, and static analysis. We can hope such problems will be addressed. The initial decision was influenced by the Unix philosophy of “doing one thing well.” Often, the problems are also a result of the C++ community being so large, with so many active organizations. For example, we have dozens of GUIs and graphics systems, some with more users than most programming languages.
Katelynn Burns from LaunchScout on Elixir Programming for the Backend
Katelynn Burns is a software engineer at LaunchScout who is proficient in Elixir. She will be discussing the ins and outs of Elixir programming.
What should a Python or Java developer know about getting started with Elixir?
One of the hardest parts of learning a new language can be knowing where to start, but I didn’t feel that as much with Elixir. The Elixir community is excited about the language and about bringing in new developers, and it shows. It even has a getting-started guide (https://elixir-lang.org/getting-started). The main thing I would tell developers who are considering learning Elixir is that it’s worth it. Elixir has some of the most readable and elegant code I have worked with, and it is enjoyable to write. It feels like a language that was written by developers for developers. I used to do some auto body work and would always complain that cars were designed with the seller in mind but never the mechanic who would have to fix them. In my experience, Elixir doesn’t have that problem. It cares about creating a smooth user experience while also being nice to work with for the developers using it.
What are some use cases for the Elixir apps you develop?
Please tell us more about your presentation upcoming at CodeBeam America 2022.
I’m very excited to give my talk, How Elixir Helped Me to Love the Back-End, at CodeBeam. I’m very passionate about this topic because I used to be very intimidated by the backend. Elixir has given me the resources and confidence to embrace the full stack, and I believe that’s important. Even if someone is focused on a particular part of the stack, it’s good to at least understand the full picture of what you’re working on. Elixir was a language I hadn’t heard of before starting my apprenticeship with LaunchScout. After learning about it, I can understand why the Elixir community is so excited: it’s a great language. I’m excited for the opportunity to share a little bit of my journey into Elixir, to hopefully help others who may be intimidated by the backend find the confidence to start that journey for themselves, and to give mentors some insight into how to guide new developers. I have a background in teaching, so I’m always excited by the opportunity to teach and to help people on their journey. I feel very lucky to have had a great support group so far in my engineering journey, so I’m excited for the opportunity to pass that forward, as well as to meet and talk to others in the Elixir community.
Paraxial.io Interview with Michael Lubas on Bot Prevention with Elixir
Michael Lubas is the founder of Paraxial.io, which helps Elixir developers stop fraudulent bot activity in their web applications. He is interested in the application of Elixir in software security.
Please tell me a bit about what Paraxial.io does.
Paraxial.io is a bot detection and prevention tool for Elixir and Phoenix applications. A bot in this context means a client communicating with your web application that is not a “real human”. For example, someone visiting your website from a mobile phone or desktop web browser is classified as a human. A bash script that sends a request to your website every five minutes is a bot.
If you operate a website, you will see good and bad bots. Most website owners want their site to be indexed by Google, for example, so they would not block Googlebot, the crawler that indexes sites for Google. A bot that’s attempting thousands of login attempts per minute using stolen credentials is a bad bot.
How does Paraxial.io compare to CloudFlare?
Cloudflare is notable for being an anti-DDoS vendor and CDN provider; it also offers a separate product specifically for blocking bot traffic. Most big CDN vendors are in the position where, once they get a client through the anti-DDoS service, they want to upsell them on a bot-detection service.
There are problems with doing bot detection at the edge: for example, if the server you are protecting leaks its real IP address, the attacker can completely bypass the CDN-based detection. Another popular anti-bot measure is reCAPTCHA; most people online have been frustrated by having to select pictures of stop signs, so the downsides of that approach are obvious.
There are major benefits to our approach over how Cloudflare deals with this problem. Paraxial.io is installed in your Elixir application code, so you have greater control over what data you want to send to our backend, for example. This is an advantage of Paraxial.io over Cloudflare for data-privacy-conscious customers.
Paraxial.io can be used with Cloudflare as well. For example, you may use their CDN service, and then Paraxial.io for bot detection. There’s no conflict at all.
Why did you choose the Phoenix web application framework for Paraxial.io?
I chose Phoenix for the backend because it’s a really fantastic way to create web applications. There’s a great harmony between the development of Paraxial.io’s backend and the code that our customers run, because everyone is using Elixir.
Paulo Valente on Neural Networks (AI) with Nx (Elixir/Erlang)
See Paulo Valente’s talk at Code BEAM America 2022.
What kind of use cases are good for Nx?
Nx brings numerical computing power to the BEAM, so the Nx library and its friends are great if you need this sort of capability in systems that were built on the BEAM.
One such use case, which also touches on the Nerves ecosystem, would be what we call edge computing: a small device, like a Raspberry Pi, could run an Elixir application through Nerves that collects data from sensors and pre-processes it with Nx before sending it to a central server.
Do you have a link to the GitHub?
This is the link to the repo which contains Nx, EXLA and Torchx: http://github.com/elixir-nx/nx
In the same organization you will also find Axon (https://github.com/elixir-nx/axon) and Explorer (https://github.com/elixir-nx/explorer), which bring neural networks and data frames to the game, respectively.
What should developers know when getting started building neural networks with Nx?
We can divide the needed knowledge into two categories, one which pertains to neural networks themselves, and the other related to implementing them in Elixir.
For neural networks themselves, a basic understanding of statistics and linear algebra helps a lot, although lots of people start learning those subjects because of machine learning.
For writing them in Elixir, a basic understanding of the Elixir syntax is needed, and from there one can build up by learning the basics of how to use Nx, and then start using Axon, which is where you can actually define, train and use neural networks.
Paulo Valente on Twitter: @polvalente
Interview with Rustam Aliev about Erlang GUI Programming with Epona
As we learn from Rustam Aliev, Erlang actually has powerful GUI programming capabilities with Epona.
Why did you need a GUI library/framework for Erlang?
It was a rainy night, with lightning flashing, thunder rumbling, and wolves howling in the misty plains. I was sitting in my man-cave testing yet another hypothesis for optimal register allocation for an obscure CPU architecture. Some of the results looked promising and I thought I might be able to publish a paper on them. Therefore, I needed a visualization tool with just the right amount of interactivity: you push a button – you get another distribution graph, and so on.
It was Erlang I was using for math. Odd choice? Not at all: it’s fast enough if you cook it right. But how to attach a GUI to it?..
wxErlang was my first and obvious choice. Well… I had some experience with wx, but the very idea of manually describing each window element seemed kind of revolting at the time – I needed a lot of elements for time series, and I needed some of them to be buttons, and some of them to be edit lines, and frames, and groups, and so on, all depending on internal values.
I wasn’t sure if I could easily build such an interface with wx. And I decided to whip up a prototype in Qt, which I know better. NIF? Nah, I was fed up with NIFs by then. So – a port? Why, let’s try it…
…One thing led to another; by the morning I had a working server in Qt, accepting commands through stdin and outputting events to stdout. It was time to test it with the open_port/2 function. Which I did – achieving my first Qt GUI from an Erlang program.
It seemed surprisingly easy to build a GUI this way, declaratively and very straightforwardly. Somewhere between “a piece of cake” and “a no-brainer.”
I named it ‘Epona’ because I was replaying Ocarina of Time again back then.
I toyed with my new creation for a while – it worked. I tried implementing a Minesweeper clone – it worked. “Wow,” I thought, much preferring this approach to wx. All that was left was to wrap it up as a gen_server, because no matter how declarative you are, you’ll still need to hold some state and receive some messages from the GUI, and the OTP approach is the best for that.
Finally, I had a binary system:
- a Qt-based server (qt5_eport), which you can compile and put into your $PATH;
- and an under-1k-SLOC Erlang gen_server module (the eponymous epona.erl), which you can run as part of your process tree and just call epona:batch([gui elements]) to build a desired GUI.
Each component does its job, and they communicate just fine – not unlike, e.g., the X windowing system on Unix.
So it goes.
Is Erlang good for graphics programming?
I’d say Erlang is good for almost anything… but that would make me sound like a fanboy, right? So sticking to the truth, the answer really depends on what you consider “graphics programming”.
- if we’re talking of a static picture – sure, just use any graphics library bindings like ESDL2;
- if it’s an animated scene you want, it’s still easy because an animated picture is basically a series of still pictures. SDL, OpenGL, even Vulkan bindings are all there… The quality of those bindings is another topic though;
- and if you need GUI specifically, it’s really only different because any real-world GUI cannot be stateless.
To operate a GUI, you’ll need to control its state: all those buttons with pressed/unpressed/disabled status, edit lines with their contents, items selected in lists, ad infinitum. It is a complex task. No one loves complexity (unless it’s some puzzle videogame).
For Epona, the trick was to correctly divide the complexity between two components: Erlang handles one part of the GUI complexity well, and Qt does the other part. Each piece of technology is productive in its own way, and, working together, they achieve The Goal.
An added bonus for me was that I could stop caring about low-level details. Calling all those x:new() or y:get() or z:load() or whatnot… it’s too boring. Once you’re in the problem-solving mind frame, it’s usually not very comfortable to attend to those details. Let GUI server care about that, I say. (It doesn’t mean you cannot care for details at all – it means you have to care only when you really have to.)
So, back to the question – yes, Erlang can be proper good for this kind of task. For what it’s worth, people use Erlang to build compilers, to solve math problems, to implement game server back-ends and real-time chats – what’s so special about GUI, huh?
What kind of applications can you make in epona? Are they cross-platform?
Well, as long as the target platform has OTP and Qt5 support, you’re probably fine. I never got to test Epona on, e.g., AtomVM, but I’m pretty sure AtomVM is intended for hardware where you cannot run Qt anyway.
It can be ported to use Qt6 or, probably, Qt4. My guess is one wouldn’t even have to modify the Erlang part at all for that.
Elixir or LFE or any other Erlang-companion languages could be used with Epona. Basically, any language which has an open_port/2 compatible function is fine.
I had no need to support pictures (like PNG or SVG files), so Epona is unable to load those yet, but it should be fairly easy to implement. Lots of properties and event handlers are not covered yet, either.
QtQuick/QML is out of consideration because that would be redundant. I haven’t tested it with right-to-left or hieroglyphic scripts. No QPainter access. No QtOpenGL, QVulkanWindow, or other fancy stuff. It’s doubtful one can use Epona for real-time applications, unless stdin/stdout speed is enough.
Thereby – just a widget-based, cross-platform GUI, that’s it.
Erlang Linters and Why Code Guidelines Matter in Software Engineering with Brujo Benavides
Learn more here: Open Inaka
Brujo Benavides is a staff engineer who works with Erlang at NextRoll. He also maintains various open-source Erlang projects; you can learn more about him on his about.me page. He is a member of the Inaka community, the authors of the erlang_guidelines repository and Elvis, among many other open-source projects.
Is erlang_guidelines just documentation, or does it have functionality that applies to the code? Or is that the purpose of Elvis?
It’s just documentation. It has some code (every rule has examples of good and bad code), and that code does compile and is generally, at least syntactically, correct. But the main purpose of the repo is to provide documentation on good practices to follow when writing Erlang code.
The actual code that validates some of the rules is in Elvis, indeed. More on that in the next answer 😉
How are erlang_guidelines and elvis related?
We created the Erlang Guidelines repo to put an end to the endless discussions we used to have in pull requests. When we were a team of 3 or 4 devs, all working in the same office, agreeing on the proper ways of writing Erlang code was… let’s say… simpler. But as we started to grow and work remotely, all that shared knowledge was suddenly spread out across the team and only shared in pull requests and other code reviews. That meant the good practices that were enforced depended a lot on who was writing the code and who was reviewing it. In turn, that led to some projects having wildly different parts that were only internally consistent, but not consistent as a whole.
We quickly realized that we needed a place where we could collectively decide how we wanted to write code as a company. And that’s how erlang_guidelines came to be. It’s a collection of all the rules that we generally enforced in our Erlang codebases, properly organized and with the reasoning behind each rule.
When we started, we thought that it was going to have… maybe… 20 rules? 30 tops!
Well, we were in for a huge surprise.
In a few months we had collected more than 60 rules, and we kept collecting them as time went by.
At some point we realized that we couldn’t possibly validate all the rules by hand on all the pull requests. We needed an automated tool to do that.
Ruby developers had Hound, so… we created Elvis (“you’re nothing but a hound dog…”, you know).
Elvis codifies and automates the validation of as many of our guidelines as we could conceivably encode. Some rules (like using the same variable name for the same thing across your whole codebase) are simply too hard (or even impossible) to codify; those must still be validated by hand. But for the other ones (e.g., use spaces, not tabs), there is Elvis!
Why did Inaka decide to put out a set of programming guidelines for Erlang to supplement the official documentation? Should other programming languages follow suit?
Well… The official documentation was poorer back then than it is today. And even today, it says little about code style and guidelines. So just following the official guidelines was, and still is, not even close to enough for us; it leaves too much room open for interpretation and creativity. For other languages, there are community-approved guidelines in a centralized repository, often sanctioned by the language maintainers themselves. Would we love for the Erlang/OTP team to sanction our guidelines as the official guidelines for the whole community? YES, VERY MUCH YES! Will that happen anytime soon? Definitely not. But that’s fine. They’re still the most used unofficial guidelines in the Erlang community. It’s something 😉
In a nutshell, how does Elvis (the Erlang linter) work?
Elvis is a static analyzer that works on a per-module basis. It picks up the code for a module, gets its AST (abstract syntax tree), and runs a long list of rules over it. Each rule is independent, has its own configuration, and may produce its own warnings. Finally, all the warnings are collected and printed out in a clear way (in an even clearer way if, instead of using elvis, the shell script, you call Elvis through the rebar3_lint plugin, which I also maintain). You can even create and add your own rules via a simple configuration file, called elvis.config. Developers usually add this command to their CI pipelines so that it prevents bad code from being merged into the main version of their applications.
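An elvis.config file might look something like the following sketch. The exact rule names and options shown here are illustrative and should be checked against the Elvis documentation:

```erlang
%% elvis.config -- a minimal sketch; the rule names and options below
%% are illustrative and should be verified against the Elvis docs.
[{elvis,
  [{config,
    [#{dirs => ["src"],
       filter => "*.erl",
       rules => [{elvis_style, line_length, #{limit => 100}},
                 {elvis_style, no_tabs}]}]}]}].
```

With a file like this in the project root, running elvis (or the rebar3_lint plugin) checks every `.erl` file under `src` against the listed rules.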
Are there any automated documentation tools (that you’d recommend) for Erlang and Elixir?
I don’t know of any automated documentation tools, but… I am also the maintainer of one automated specification-writing tool: rebar3_typer (https://github.com/AdRoll/rebar3_typer/). It’s yet another rebar3 plugin, one that can add specifications to all of your functions automatically.
If you were to change anything about Erlang Ecosystem Foundation documentation and erlang.org documentation what would it be?
I don’t know if I would change anything in particular. The Erlang/OTP team is doing a great job focusing on developer satisfaction during these last few years. Part of it includes a lot of progress to make documentation more accessible and understandable. We still have a very long road ahead, but I wouldn’t personally request anything in particular.
What would you do to improve erlang_guidelines? Can anyone contribute to this project on GitHub?
What I would love (other than the guidelines being made official by the OTP team or the EEF) is for the repo to eventually be turned into a website where people can inspect the guidelines more dynamically and interactively. But that requires quite a lot of time and expertise, and I don’t have either of those 😛
Regarding contributions: YES!! Everybody can contribute… If anybody wants to promote a rule, they can open an issue (the only requirement is for the rule to have a Reasoning and Examples of good and bad practices). If anybody wants to demote a rule, they can also open an issue (this time the only requirement is the reasoning). If said issues gather enough votes… I’ll happily turn them into pull requests and, once reviewed by the other inakos, merge them into the guidelines.
If you were to design your own automated documentation system for erlang, what would that look like and its functionality be?
I’m actually not sure about this. The only thing that comes to mind would be to add doctests, like the ones Elixir has. With them you can include code in your documentation (as examples) and it gets evaluated and run so that you’re sure that it’s still working after you made changes to the production code. It’s a really really nice feature.
Big thanks to Brujo Benavides.
The Art of Technical Writing for Software Engineers (ApacheCon Asia 2022)
By Matthew Sacks
Engineers and business managers are interested in learning the basics of technical documentation and how to implement it in their organizations; advanced documentation-engineering topics will also be covered toward the end of my presentation.
About the Speaker
My name is Matthew Sacks. I have been a systems administrator by trade for the past 17 years, starting in 2005 as a network and systems administrator for MyLife.com (formerly reunion.com). I started as a junior network and systems administrator and was promoted to systems administrator after one year of working in the data center and IT Operations department there. I had always done well in school on English exams and essays and had a knack for writing, so I took the skills I learned on the job and my curiosity for finding new software technologies and wrote my first published technical article, on Splunk 1.0, for Sys Admin Magazine back in 2005.
I created documentation systems and implemented other technologies for the development team to collaborate more and faster, reducing the time it took to resolve issues and bugs in the code, as well as to plan and specify new technologies to be implemented – and it always started with a proposal document or an outline of what was to be implemented. I also helped start and contributed to the USENIX Blog Team, which I believe is still in operation today (USENIX now has a robust regular blog), and contributed to other publications such as Linux Pro Magazine, ADMIN Magazine, InformIT, and other reputable online publications.
From there, I worked at various Internet-based companies around Los Angeles, such as Edmunds.com and Dun and Bradstreet. Between working at Edmunds and at Dun and Bradstreet, I wrote my first book, on Web Development Operations (Web DevOps), published by Apress, about how to function in an operations team and reduce barriers to working better with development teams as an operations professional.
What You Will Learn
In this presentation, you will learn how to architect software documentation, create documentation artfully, and design a documentation infrastructure.
In my experience as a systems administrator, I learned early on that having good technical documentation for the applications, systems, and networks being supported had a lot to do with the success of production support and of the engineering design of the software at all the organizations I worked at over the years.
In this presentation, you will learn how to set up a document and documentation infrastructure and the basics of technical writing, as well as more advanced topics such as document automation and product design specifications.
Why does art matter in technical or engineering writing?
Art matters in engineering because in the engineering and design process, you have to start with some kind of sketch or base framework before you start engineering, designing, and building any technology or software. Components of the whole document or product being built are inspired through artistic expression, and even the absence of art from a design or engineering document is a form of expression in and of itself. For example, the font I chose for this Word document where I am describing the elements of technical writing arts is called Calibri, which is a default Microsoft Office font. It is similar to Arial and doesn’t have much detail or style but it is simple and clean in design and easy to read; therefore, by choosing a certain font style, I have decided how readable my document is. This is where art comes into engineering and design and how it affects the way things are built.
How can technical writing be an art form?
Not only is technical writing an art form like poetry, stories, movie scripts, songs, or a new programming language or spoken language (linguistics); it is an art that allows you to create functionality in life that was previously unavailable. Think about the Uber Eats app: when you open the app, there is text telling you that it is the Uber Eats app you are using. Next, there is the graphical user interface, which allows you to touch and place orders and type your payment and address information into the app. Then there are the backend APIs at Uber Technologies, Inc., that dispatch the request to a driver’s mobile app in the field.
Every interaction that created this new functionality – the mobile application that is the consumer-facing side of the technical infrastructure that supports and makes Uber Eats what it is – is some kind of art form in the expression of functional code, graphics, and sound.
Before Uber Eats could be created though, there had to be some kind of design email or technical document produced that defined the patents and technologies, or the recipe for the code that had to be produced before you could order your burger delivered to your doorstep on your iPhone or Android phone.
Why do communication styles and artistic elements matter in terms of technical documentation?
Documentation is an art form because the way you write and arrange words and letters communicates differently when expressing complicated ideas, such as the structures and compositions of code that make up software; documentation is what supports the process of building that software.
You can write code without documentation, but without any structural reference to how the code was created, managed, and maintained, you may have difficulty performing those tasks with your code base.
Each document and each technical author will have their own style of writing and documenting software technologies, but when an organization has agreed-upon principles for how documentation is to be composed, the code becomes easier to produce and maintain.
The Basics of Technical Writing
The basics of technical writing include how to write API documents such as specifications, runbooks (how to operate your software), getting started guides, and being able to describe technical solutions in a format that makes sense to multiple audiences. The audiences you are writing for are software engineers, project managers, operations, and business/executive audiences when writing any technical document.
To write for multiple audiences it is important to structure every document with the following outline:
Intended audience: here you describe what role the reader should have and what they should expect to get from the document they are reading or implementing instructions from.
Almost any technical document should have the following features:
- Purpose of document – what will the reader gain from reading it?
- Intended reader – always write for multiple audiences, depending on which types of readers you believe will read and benefit from the information.
- Date and metadata (author, tags, etc.) – so the document can be searched and organized by category.
If your document does not contain these three things, it may be difficult for wikis and indexing systems to categorize it properly, and without a statement of purpose and a description of who should read it, it may be unclear why the document was created and whom it is intended to address.
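These three required elements can be captured in a simple front-matter template; the field layout below is just one illustrative way to arrange them:

```text
Title:            <document title>
Purpose:          What the reader will gain from this document.
Intended reader:  e.g., backend engineers new to the service; project managers.
Author:           <name>
Date:             <YYYY-MM-DD>
Tags:             <comma-separated keywords for search and categorization>
```

Placing a block like this at the top of every wiki page or specification makes documents easier to index, and makes the purpose and audience clear at a glance.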
Some of the key points to address when writing technical documentation are as follows:
- Ensuring the document is easy to find.
- Ensuring it is well written, clear, and has a clear goal or outcome after reading.
- Addressing the reader’s assumptions (assess and state the intended skill or coding level of the reader).
- Making it easy to find related content (recommendation engines).
- Writing for multiple audiences and addressing all of the intended readers.
(thanks Jarek Potiuk for your input)
Documentation and Aesthetics
Art is about visual communication: when you look at a da Vinci, a Cézanne, or a Matisse, each painting communicates a different emotion and/or visual message. If there is nothing else you take from this talk, please note this section as the most important message in my speech. When you study aesthetics and design as a writer, you will improve your technical writing skill; and as you improve your writing skill, you will grow as an artist.
Every document has its own aesthetic style. The style which you discuss, design, and create will have a positive effect, if done correctly, throughout the lifetime of your organization.
Common Documentation Systems (Wikis)
- Wiki.js (or js.wiki)
The documentation system I first used when I was a technical writer was, I believe, Microsoft SharePoint. It had recently been released, around 2005, and it came with a lot of features for documentation infrastructure. Later on, I found that Confluence was my favorite option; it integrated well with the rest of Atlassian's bug-tracking and software engineering suite of tools. You could reference JIRA ticket numbers within your documentation and it would automatically create a link to the JIRA issue being referenced. I would recommend Confluence and the Atlassian suite as the most common and robust documentation infrastructure currently available for software teams. Still, there are certainly many open source and free alternative solutions for documentation infrastructure.
Setting up a Documentation Infrastructure for your Organization
- Identify which documentation system you want to use for your primary content or wiki. Do you want a paid solution? Cloud-hosted or on-site? Wiki and bug-tracking integration? Meet with your engineering team and come up with a wish list of what you want your documentation system to do better than the existing one or, if you are just starting out, which goals the documentation system should accomplish. I recommend Confluence or JS.Wiki in most cases.
- Define the minimally required information you need in all documents. There should be a defined structure of your documentation style and structure (through the use of templates or otherwise) that all members of an organization are required to follow. The intended reader, the purpose of the document, and metadata are all necessary minimal components of any documentation style definition in any organization.
- Create Templates. Most documentation systems and wikis employ the use of templates. You can create a base template and integrate your organization’s defined documentation styles and minimum requirements into these templates for members of the organization to use and fill out the information so that your documentation maintains a consistent style throughout the organization.
- Employ a technical writing editorial team. Whether you have dedicated technical writers on staff or appoint leads in charge of documentation efforts, you should define an editorial review team and process so that new documentation is vetted and checked for errors and accuracy before release; the documentation may be used to power crucial, potentially life-critical, software systems.
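The minimal requirements above (purpose, intended reader, author, date, tags) can also be enforced programmatically. Below is a minimal sketch of such a check in Python; the front-matter format and the field names (`purpose`, `audience`, `author`, `date`, `tags`) are illustrative assumptions, not a standard, and any real wiki would expose its own metadata mechanism.

```python
# Hypothetical sketch: lint documents for the minimal required metadata
# described above. Field names and the YAML-style front-matter layout
# (a block between leading '---' lines) are assumptions for illustration.
import re

REQUIRED_FIELDS = {"purpose", "audience", "author", "date", "tags"}

def missing_fields(doc_text: str) -> set:
    """Return the required metadata fields absent from a document's
    front matter (the block between the leading '---' delimiters)."""
    match = re.match(r"^---\n(.*?)\n---", doc_text, re.DOTALL)
    if not match:
        # No front matter at all: everything is missing.
        return set(REQUIRED_FIELDS)
    present = {
        line.split(":", 1)[0].strip()
        for line in match.group(1).splitlines()
        if ":" in line
    }
    return REQUIRED_FIELDS - present

doc = """---
purpose: Explain how to deploy the billing service
audience: Backend engineers, on-call SREs
author: J. Smith
---
# Deploying the billing service
"""
print(sorted(missing_fields(doc)))  # -> ['date', 'tags']
```

A check like this can run in an editorial review pipeline, rejecting pages before they reach the wiki rather than after readers notice the metadata is missing.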
Architecture Design Diagrams
Architecture and infrastructure design diagrams are crucial components of building and maintaining any software infrastructure. Whether it is a mobile app, an application server, or a new web application or networking technology, there must be some architectural design structure for how the code will be assembled and built, seen from a high-level view. When you construct a building, you must have an established vision before you write a blueprint or build a three-dimensional model of the building you have in mind. It starts with a sketch. A graphic design. Graphic designers are a crucial element of information technology and software design, and you will need some skill in assembling or drawing design diagrams in software technical writing.
Product Design Specification
In this documentation example, we will cover how to build a software product from scratch, in this case a mobile application. Any software product needs to be specified in documentation before it is built: before you write any code or make any design, there needs to be a document describing how the software, here a mobile app, will be built.
This is a high-level document; each of its components may have a supporting document that further details parts of the new software product design. For example, the backend infrastructure is specified here at a high level, but when it comes time to build the actual product, it will have to be elaborated in a supporting backend software architecture diagram.
User interface design: here we define how the user interface should be designed so that the design team can build the graphical user interface for the application or mobile application being created.
Backend architecture: here we define how the backend architecture will be built to support the web, mobile, or desktop application. This may include, but is not limited to, servers, serverless containers, microservices, other types of containers, backend APIs, and services that support the customer-facing or user-facing application interface.
Cloud infrastructure: the cloud architecture will resemble the backend infrastructure documentation and diagrams, but it is noted separately in the product design specification because many technologies now utilize some kind of cloud computing platform. It is important to indicate which cloud provider will be used, and associated documents or sub-documents can detail how the cloud computing infrastructure will support your application infrastructure.
Methods of operation: every web, mobile, or desktop application has some way of being interacted with and operated; thus, when building a specification for this kind of application, there needs to be a defined method of operating it in the product design specification.
The methods of operating your application may include, but are not limited to: voice command, keyboard input, and touch input or buttons/controls. How the user interface will be used must be well defined before building any application, and this section may include user interaction mockup diagrams.
Monitoring and analytics:
Every application should have some defined method or API for gaining insights into how the application is being used by users and administrators. There need to be feedback mechanisms such as Google Analytics and/or Hotjar for Web apps, and Flurry Analytics or similar solutions for gaining insights into how the Mobile Application is being used once deployed to test or production.
Artificial Intelligence/Machine Learning Components:
Until recently, this was not a necessary part of product design specifications. However, with artificial intelligence and machine learning capabilities now widely available through API services, libraries, and other means, it is crucial to define which types may be used in your application and how; this should be an integral component of all software product design documents going forward.
Creating diagrams is a crucial element of writing technical documentation for software. Some common tools you may use to draw documentation diagrams are Microsoft Visio, Adobe Illustrator, and Adobe Photoshop; diagrams can even be hand-drawn. We will briefly cover some of these tools and how they are used to create network, system, software, and security architecture diagrams.
1) Visio – Microsoft Visio is an excellent tool for creating infrastructure and software architecture design diagrams as well as network architecture diagrams. Any diagram that relates to system or software infrastructure can be created using pre-made stencils for most platforms, operating systems, and network types. For example, AWS has its own set of stencils and Cisco has a set of stencils for networking that make it easy to draw and diagram your infrastructure.
2) Illustrator – Adobe Illustrator is more of an art tool, used for creating illustrations digitally from scratch, but if you know how to draw in Illustrator you can also make custom diagrams with a high level of detail.
3) Balsamiq – Balsamiq Mockups is cloud- and desktop-based software that I find very useful for creating wireframes that establish the structure and layout of web and mobile applications, but it could potentially be used for systems and networks as well.
The Developer Interview Format
In my work at the Bitsource, a software engineering blog I created back in the 2010s, I traveled to many conferences around the country interviewing software pioneers, and I learned that the developer interview format is an effective way to discover information about software that might not have any documentation in the first place. The discovery process lets you gain insights into how the existing code or software architecture is composed, and interviewing the developer or developers behind the code can reveal aspects of its inner workings that traditional documentation does not address. It may therefore be crucial, when releasing or redesigning software, to conduct some kind of interview and get all of the questions out on the table for readers of every skill level. Once produced, the interview will often reveal new types of documentation that need to be created to support your software, and it can act as a reference point for the more traditional documentation associated with the software being discussed.
Feedback and Editorial Review
Every document should be reviewed by a professional technical editor or developer editor. Any first draft usually has grammatical issues, errors in code, and other problems that will prevent the document from being useful and error-free. Technical editing can be outsourced on websites such as Upwork and Fiverr, but depending on the needs of your organization it may be wise to employ an entire technical editorial team to help with documentation efforts, as the roles of technical writer, editor, and documentation manager are becoming more prevalent in organizations' IT departments.
Automated Documentation Systems
I gave a talk at a USENIX LISA conference about 10 years ago (the talk slides are no longer available online) about the need for automated documentation systems. At the time, the only automated documentation systems were tools that generated API docs, with little commentary or explanation, such as Javadoc for Java or pydoc for Python code. Now there are automated documentation systems for many different programming languages and platforms, and automated documentation features are appearing in IDEs and orchestration systems.
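To make the pydoc example concrete: pydoc, which ships in the Python standard library, builds its documentation pages directly from docstrings in the source code. The function below is purely illustrative (its name and behavior are made up for this sketch), but the rendering call is real.

```python
# Illustrative example: pydoc generates documentation pages straight
# from docstrings, with no separate documentation source files.
import pydoc

def retry(attempts: int = 3):
    """Retry a failing operation up to `attempts` times.

    This docstring, not a separate file, is what pydoc renders
    into the generated documentation page.
    """

# pydoc.render_doc returns the same plain-text page that
# `python -m pydoc <name>` prints on the command line.
page = pydoc.render_doc(retry, renderer=pydoc.plaintext)
print(page)
```

This is exactly the trade-off mentioned above: the output faithfully mirrors the code's signatures and docstrings, but offers no explanation beyond what the developer wrote inline.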
I looked online and tested a few free trials of automated documentation systems, and here is what I discovered. I cannot say I have fully evaluated any of these services, but I can say they are worth looking into further as documentation automation systems.
Documentation Automation for Teams | Light AI — Light Docs: this system uses Markdown files, OpenAPI files, GitHub data, and other types of wikis to update and generate documentation.
Technical documentation encompasses many of the ways art is employed to create software technology, defining the building blocks of how code is to be designed and built, from idea to solution to actual code. The way your documentation systems and content are arranged and disseminated has a great impact on the success of your product roadmap and on code quality.
Interview with Francesco Cesarini of Erlang Solutions on Erlang and Elixir Use Cases
Why are Erlang and Elixir such powerful languages?
Erlang was invented to solve a problem. You need to put this in contrast with programming languages in search of a problem: you cannot go out, invent a language, and then figure out what it is good for. When inventing a language, you need to have a problem domain or use case in mind and, by prototyping in iterative processes, reduce the pain points felt by developers working on that problem. This is what makes Erlang so powerful.
Elixir has just turned ten. Despite there being over 30 languages running in the Erlang ecosystem, it has steadily been gaining traction and is today the most popular of them. It differs from Erlang in its tooling, which encourages a top-down development approach. The package manager encourages reuse of modules and libraries, together with frameworks focused on web development and APIs for mobile applications. Its Ruby-like syntax feels familiar to the many developers coming from a Ruby background, and together with top-class documentation this reduces the barrier to entry and ensures productivity from day one. What has happened with Elixir is that it has taken the power of Erlang and made it even more powerful, by providing a development environment and documentation standards a large group of developers are familiar with.
What are the most common use cases for the Elixir language?
The most common use case, centred on the Phoenix framework, opens the door to developing APIs, mobile backends, and websites. Everyone in the ecosystem is using Phoenix for their front-ends. Bleacher Report was one of the first companies to lead the way, migrating their Ruby stack to Elixir and reducing their hardware footprint by 90%; it would have been more had they not needed extra hardware for redundancy. The Nerves framework opened the door to embedded developers, providing a platform to build, deploy, and manage the systems which run and control these devices. Farmbot is a great example of a Nerves application, allowing users to engage in precision farming whilst reducing and optimizing resource consumption; check out their really cool YouTube videos if you want to learn more. And finally, keep an eye on the machine learning space in Elixir. The components to build machine learning frameworks are being put in place, and when paired with the BEAM virtual machine and Nerves, they will be able to run in Edge networks and IoT devices. Whilst it is early days, follow the development of the Axon ML framework on GitHub, as use cases will be announced soon.
Why is Elixir Erlang based?
This goes back to the problem Jose Valim was trying to solve when he created Elixir. He wanted to bring the power of Erlang and the BEAM to other communities of programmers, starting with the Web, but then moving on to other domains, including embedded systems and machine learning.
What are the most common Erlang use cases?
Erlang was created to program the next generation of telecommunication switches, at a time when telecom markets were being deregulated, monopolies broken up and privatized, and networks digitalised, merging fixed and mobile voice telephony, media, and internet into the same infrastructure. That meant the systems Erlang was created for had to be scalable and resilient, and had to handle peak loads predictably whilst keeping support and maintenance costs under control. When the internet came along, pretty much any online system had to display those features. A website would have to go predictably from handling a few page impressions per second to thousands when a page got posted on a popular site; an SMS aggregator would have to deal with an influx of millions of SMSes during TV competition votes; a stock exchange would have to handle tens of thousands of simultaneous buy and sell orders during turbulent markets. Today, we see Erlang (and languages in the Erlang ecosystem) being used in mobile app and website backend development, bank switches, messaging solutions, IP router control systems, 5G network exchanges, blockchain and crypto, massively multiplayer online games, ecommerce solutions, online advertising, and video streaming. The list could go on, but I hope you get the picture. They are all systems which, whilst doing very different things, have to handle their load reliably, predictably, and without failing, and without costing a fortune in maintenance or infrastructure. All of these properties are inherited by Elixir by default.
What is the development time for an Erlang-based app vs. an Elixir one?
It depends on the problem you want to solve and the frameworks you use. Elixir tends to have more components and frameworks, and as such, more dependencies. You pull them in through the hex package manager, and whilst doing so, lose the flexibility and control on how things are done, but achieve time to market. In Erlang, you tend to work from the ground up, solving complex algorithms and problems, and if successful, glue them together. This gives you more flexibility on your final architecture, and more control, but takes longer.
What is the runtime performance of an Elixir-based app versus pure Erlang?
Elixir compiles to run on the same virtual machine as Erlang, so performance is not an issue. You choose Erlang and Elixir not for speed of execution but for speed of development. That said, user-friendly interfaces to libraries written in other languages (often C or Rust), where you can offload CPU- and GPU-intensive computations, are being developed (look at Numerical Elixir, https://github.com/elixir-nx/nx), and the JIT compiler, which allowed WhatsApp to reduce its hardware needs by 25% (https://twitter.com/wcathcart/status/1385253969522413568), is making the ecosystem faster. Fast enough these days to start running machine learning algorithms.
What are the machine learning use cases for Elixir?
With Axon, I believe they will be targeting similar use cases as PyTorch, but doing so with the accessibility and ease of use you find in Elixir. Think of computer vision and natural language processing, but doing so at the source of your data, in an easy to use framework. This means you do not have the overheads of transporting your data, and can easily run your training algorithms or apply your models in the embedded devices or Edge networks themselves. This starts making sense now that we are gathering more data every year than all previous years put together. It is getting expensive to move this data, so instead, move the compute to the data. Give the community six months, and many of the early adopters will hopefully start talking about their case studies. Right now, it is still too early, as the frameworks and libraries are still being developed.
Can you build APIs and web apps with Elixir? Is it good for mobile app backends?
Yes, absolutely. These were some of the first use cases targeted by the Phoenix framework. Phoenix is seen by many as the next generation of Ruby on Rails, with the difference that it scales. WhatsApp achieved two million TCP/IP connections on a single (modified) BEAM VM running on (an also modified) FreeBSD in January 2012 (https://blog.whatsapp.com/1-million-is-so-2011). The changes made their way into mainstream OSes and the BEAM, allowing a Phoenix instance to run with two million simultaneously open WebSockets on AWS in 2015 (https://phoenixframework.org/blog/the-road-to-2-million-websocket-connections). Phoenix would have scaled further; what stopped it was the network limitations imposed by AWS, as the instance they were using was not maxed out.
Frontend for mobile apps?
No, Elixir is a backend language. But interesting things are happening in the front-end space with Phoenix LiveView, which provides server-side rendered HTML. Those who have been around a while will see it as history repeating itself, and those who are new to programming will love it, as it allows them to work on the front end from the backend, using a language they enjoy working with, with all of its safeguards.