Written 12 August 2023
I lived in Afghanistan for nearly 2 years across two separate US Army deployments. I write this just over 14 years after first setting foot in Afghanistan.
On the first Afghanistan deployment I was constantly traveling, and travel in Afghanistan was rough. No soldier was authorized to travel between bases by means other than air unless a planned mission dictated otherwise. Unless you held VIP status, all air travel was space-available. VIP status included special forces soldiers and the grades O6, E9, and CW5.
When I first started this tour everything about traveling was exhausting. Carrying all that weight and walking long distances to find housing was exhausting. Sleeping in strange and rough environments while holding onto my secure SIPR equipment was exhausting. Packing all my things in the early morning to catch a flight roll call was exhausting. Waiting at the air terminal for hours with all that equipment at various times of the day was exhausting. There was one time I had to relay between bases to get to my destination and wasn't able to go to the bathroom for 22 hours. But it only took about 3 or 4 of these travel missions for that exhaustion to go away. This simply became the pattern of life, and everything always felt so new and unknown that the first part of this deployment was constantly thrilling and exciting.
On the larger bases you had to set proper expectations: you were going to be at the bottom of a list containing up to 300 names, and names stayed on the travel list for up to 10 days. So we frequently attempted to plan our travel missions before returning from the prior mission so that on return to Bagram we could put our names onto the next travel list before leaving the air terminal for our b-hut. That way we could reset for about 3 days and let the list cook off. We also showed up for roll call around 0200-0400 in the morning hoping there would be less competition for a seat on a flight. I remember a signal unit at Bagram kept token sergeants major around so that if a technician needed to fly out to perform an emergency repair on an important piece of communications equipment, they would receive immediate VIP status by traveling with a sergeant major (E9).
My typical travel load included a rucksack on my back with up to two weeks of clothes and toiletries, a backpack on my chest with 3 ToughBook laptops, my 9mm pistol, spare ammunition/magazines, my kevlar helmet, body armor with the ceramic plates, and various knickknacks in my pockets like gloves, hand sanitizer, important papers, a flashlight, and a reflective belt. I discovered my load weighed about 120 pounds when I had to weigh it separately at the NATO air terminal in Kandahar.
Most travel in Afghanistan occurred with military pilots and military aircraft. For transporting people this typically meant a C-17 when entering or leaving the country and C-130s within the country. For rotary aircraft I only saw Chinooks transporting people. There were also civilian pilots flying civilian aircraft to transport people.
I remember one time flying back to Bagram from Jalalabad, which was probably only 50 miles as the bird flies and a straight shot traversing a large river valley. These civilian pilots wanted to take the scenic route, so instead they flew into a narrow, extremely deep canyon. This canyon might only have been three times the wingspan of the small civilian plane. Every time a river system fed into the canyon from one of the canyon walls there was an instant updraft that lifted the plane, and then the plane would immediately drop back to its prior elevation. I read in the Michael Crichton book 'Airframe' that this is called porpoising, and it's the only time in my life I felt motion sickness, but what a rush. Pilots could never get away with playing with passengers like that in North America.
I can remember another time when the military was transporting some captured Taliban VIPs from Kabul to Bagram for interviews on a C-130. First, know that Bagram is a slow 45 minute drive from Kabul in tactical vehicles, so the flight, including liftoff and landing, is really only about 20 minutes. I just happened to make this flight as a passenger. I have no idea what the pilots' intentions were, maybe they were trying to scare these Taliban leaders, but they were attempting advanced aerobatics over the valley between Kabul and Bagram. It was both thrilling and weird, because keep in mind that a C-130 is basically a school bus with wings. At any rate it was exciting nonetheless, and these Taliban leaders were super impressed and thanked the Americans for the excitement. The entire event struck me as exceptionally bizarre, but I am glad everyone was safe and had fun.
I first entered Afghanistan in July 2009 and moved into the Dragon housing neighborhood. At this time almost everybody on Bagram Airbase lived and worked on the western side of the base, which was bisected by a single roadway named Disney Drive after an army specialist who died in country. The eastern side of the base was not well established; the first offices and housing would emerge there around summer of 2010. The western side of Disney Drive was mostly housing neighborhoods but also contained a special forces compound, an intelligence SCIF (sensitive compartmented information facility), a robotics lab, and some other things. The eastern side of Disney Drive contained some new office buildings, aircraft hangars, the fixed wing terminal, the rotary wing terminal, and more work related things.
The Dragon neighborhood was near the center of the base from north to south and just down a road from the fixed wing air terminal. This was convenient for travel since I had to carry so much gear. The rotary wing terminal was almost a mile to the south. I only used the rotary terminal once. I can remember playing video games early in the morning instead of getting ready for travel, so I left the b-hut late and kept thinking I could make up the time by jogging. That was the closest I have ever been to a heart attack, attempting to jog nearly a mile with my complete travel load of nearly 120 pounds.
The western side of the base was immediately adjacent to civilian farmland and housing neighborhoods. I can remember a civilian housing neighborhood just outside the north side of the base near a small creek. Occasionally teenage boys would try to sneak onto base there for thrills. As I was leaving the country in 2010, a small group of Taliban, maybe 2 dozen, attempted to storm the base from just south of that civilian housing neighborhood into the military housing area named Cherry Beasley. The Taliban initiated with a grenade that severely injured a soldier's leg, but they were otherwise eliminated as they attempted to climb the perimeter fence. At this time there were nearly 27,000 people living on the base, more than half of which were soldiers always armed with assault rifles.
Most of the housing comprised wooden shacks made out of sheets of particle board called b-huts. Each b-hut typically comprised a central hallway with 4 rooms on each side, so a full building could house 8 people, each in their own room. These rooms reminded me of small private college dorms due to the space and furniture. The layout was quite cozy, except they were incredibly dusty. Northern Afghanistan was infested with some kind of tiny wood eating insects that would leave pinholes in the walls, ceilings, and furniture. It was common to wake up with a fresh layer of sawdust on your desk and bed or see little poofs of dust falling from the ceiling.
The air quality at Bagram was stunning, possibly due to the 4,200 foot elevation. I had been diagnosed with asthma only a year prior yet experienced no asthma symptoms of any kind for the first time in my life. It felt amazing. It was also the driest air I had ever experienced, which also felt incredible. I can remember some days looking out to the horizon towards the north and seeing the layers of mountains stretching possibly more than 50 miles due to the combination of dry air and the vertical sharpness of the mountains. For example, on a cruise ship on the ocean you often cannot exactly make out the horizon because after a while everything gets blurry due to natural humidity. Northern Afghanistan was the only time I could look to the horizon and see almost forever unobscured by any kind of atmospheric distortion.
I did not grow up near mountains, so anytime I see mountains in North America I find them stunning. Bagram sat in a valley immediately between Kabul to the south and the Hindu Kush mountain range to the north and west. The Hindu Kush shot out of the ground with an immediate sharpness unlike anything I have heard of in North America. The first layer of mountains on the western side started maybe 10-12 miles away from base but crested about a mile in elevation higher than the valley floor. It is hard to describe what that's like. It's like looking up from the horizon towards the sky, but a significant portion of the sky is just gone.
In November 2009 American television broadcaster Fox chose to televise its NFL pregame show live from Bagram Airbase, and I was selected to participate on the show. I also remember dropping a super easy football pass from Terry Bradshaw live on national television. I grew up knowing that Terry Bradshaw retired to Southlake, TX, where I grew up, so it's strange that I had to be on the other side of the planet to meet him.
Kandahar Airfield is everything that Bagram isn't: humid, hot, dusty, and miserable. It was also the largest military base in Afghanistan, having up to 50,000 residents during the troop surge in late 2010.
I remember developing combat fatigue near the end of my first Afghanistan deployment after getting stuck living in the transient tent awaiting travel back to Bagram. The transient tent was an enormous 500 person tent that was never cleaned because it was always full of transient soldiers coming and going from Kandahar, so it smelled intensely of dirty feet and held an amazing amount of dust on everything. This transient tent was also located a short walk away from the poo pond, a giant sewage cesspool with a surface area of nearly 4 acres. I remember becoming exhausted by the foot smell only to go outside knowing the air smelled strongly of raw sewage. Since this was a giant circus tent filled with strangers coming and going, you had to constantly guard your personal electronics in hopes that they didn't walk away. Unsurprisingly, nobody wanted my weapon. I remember abandoning my weapon in my sleeping bag in order to shower because I had no place to secure it.
I lived on Kandahar permanently for my second Afghanistan tour, in a CHU (containerized housing unit) at the far corner of the central CHU yard, about half a mile from the poo pond. You could still smell it every time you went outside, but it wasn't like you were swimming in it.
When I first arrived in Kandahar for my second Afghanistan tour, in January 2013, there existed a dark unlit alley separating the CHU yard from the sustainment private compound containing our offices. The first thing I was told was that walking down the dark alley at night was expressly forbidden by our commanding general. This alley was known as rape alley because of a high occurrence of sexual assaults. It wasn't long after that a female soldier living at one corner of the CHU yard stepped outside to smoke. Somebody wearing civilian clothing distracted her while a second person clocked her in the back of the head with something like a body armor plate. She blacked out from the head injury, and the men dragged her into a nearby construction pipe in an attempt to rape her. She wasn't wearing much clothing, as she was dressed for bed, so the men did not think to search her for weapons. She pulled out her knife and almost cut one of those men's calf muscles off. She then walked to the hospital for treatment. Almost immediately after this event numerous bright lights were installed along rape alley, and sexual assaults in that location were eliminated.
Strangely, the greatest social problem at Kandahar Airfield was unemployment. Much of the manual labor was performed by contractors brought in from third world nations by foreign contract companies. When these companies lost their contracts they simply abandoned their employees, who then became stranded on base in an unforgiving combat nation with no modern infrastructure. I have no idea how this problem was solved, but I do remember security events where people were told to shelter in their offices while security forces walked around rounding up unemployed persons.
FOB Phoenix was on the outer edge of Kabul. You could reach it by chopper from Bagram on the 'black' chopper ring, but your best bet was to fly into Kabul International Airport. One side of the airport also served as the Afghanistan NATO mission headquarters and featured amazing European food in the dining facilities. Everything in Kabul other than FOB Phoenix required flying into the airport and then driving through the city. New Kabul Compound (NKC) required driving to the large traffic circle downtown and then performing a U-turn to drive the wrong way on the freeway you had just taken in order to reach the compound. There was also a large hill full of residential houses overlooking one side of NKC. That side of the building featured a warning sign to run due to snipers, which was a bad sarcastic joke.
Driving through Kabul felt strange, because as you drive into town you pass a lot of rubble. Many of these buildings were destroyed by the civil war of the early 90s. The civil war ended when the Taliban firmly took control and attempted to eliminate the militias. Closer to downtown there were some nice office buildings and a lot of commercial investment, but when you looked at the people you always saw poverty.
The geography of Kabul is unique, which explains why it has historically been nearly impossible to conquer. Kabul is an elevated bowl: a valley surrounded by a circular ridge of mountains like a large meteor crater, except the valley floor of the city still sits at a higher elevation than the land outside the circular mountains. This means all air pollution generated in the city remains trapped in the city. In the winter I remember it looking like a tiny white fog blowing all around you but smelling like some combination of burnt metals and plastics. I can remember one time when the pollution was so bad that it overflowed the containing mountains and wafted into Bagram.
Camp Eggers was a small square of downtown Kabul that was fenced off and became a military base. It was actually located on the same traffic circle as the US embassy, New Kabul Compound, and ISAF headquarters. Each of those were separate bases with separate security, by the way. The base contained beautiful houses with yards and tall trees that became offices. Civilian office buildings were converted into soldier barracks. There were beautiful grapevines growing throughout the base and large beautiful flower gardens. The dense array of buildings, containers converted into buildings, and winding passages in between felt very urban, which is very strange for a military base. I remember thinking this was the most beautiful place I saw in Afghanistan.
I happened to be at Camp Eggers when the car bomb exploded in front of the Indian embassy. This was some distance away from the American compounds and I was in no danger, but I can still remember just how loud it was and feeling the ground shake. I could tell from the sound versus the ground vibrations, and from the fact that nobody on the compound was injured, that it wasn't nearby, yet it still felt a bit eerie. You knew somebody somewhere was hurt at that very moment.
FOB Sharanah was the highest elevation forward operating base in Afghanistan at around 7,800 feet. In the extreme distance near the horizon you could barely make out the presence of distant mountains, but the base itself sat in a flat valley with nothing around.
I remember that FOB Salerno was at a much lower elevation, maybe only 1,200 feet or less. I also remember taking a helicopter ride directly from FOB Salerno to FOB Sharanah, climbing over 6,000 feet vertically. It felt like the helicopter was moving straight up into the air, and yet we were never far from the ground. The natural scenery was amazing to watch. I also remember taking my first few steps at FOB Sharanah with all my gear on and feeling immediately short of breath. Even with my gear off, walking up a small hill felt as exhausting as running 2 miles for speed; I needed to stop and take a break from a short walk.
FOB Sharanah was a blackout FOB, which means at night there were no light sources of any kind. This was the first time I had seen an airplane fly in and land in complete darkness. The landing strip was lit with infrared emitters to guide the aircraft. Because the airplane had no windows you didn't think about it until you exited the plane and its doors closed behind you, at which point you couldn't find your way off the landing strip without a guide or a flashlight.
FOB Salerno contained an orange orchard and a civilian restaurant serving authentic Afghan food that was extremely popular on base. I can remember many trees on base and a large hill in the middle of base holding a giant piece of artillery. They would occasionally fire the cannon in the middle of the day just to remind the nearby Taliban forces to leave the base alone. Still, I remember medical evacuation choppers frequently bringing soldiers back to base for emergency treatment of combat related injuries.
I encountered a young female bone surgeon on FOB Salerno whom I had met earlier during my pre-mobilization time at Fort Hood. I didn't realize at that time what it means to be a bone surgeon in a combat surgical ward. The LT who showed me around FOB Ghazni described in detail his encounter assisting this doctor with an emergency procedure. For shattered bones she would have to slice open the appendage enough to slide one hand between the muscle and skin, feeling for shattered fragments to either extract or, if possible, bind back together. It is a delicate and thorough process because all shattered pieces of bone must be accounted for; loose bone fragments will continue to internally cut up tissue, resulting in infections, the most serious concern.
I visited Jalalabad once for sure, maybe twice, but I cannot remember anymore. I don't remember much about it except that it was a beautiful agricultural valley at only about 1,200 feet elevation. It was also extremely hot and humid compared to Bagram despite being only about 50 miles to the east. I also remember that the standing water combined with the increased heat and humidity meant mosquitos, which did not seem to exist at Bagram, and mosquitos meant malaria. They were assigning Article 15s to people who contracted malaria and were proven not to be taking the assigned daily malaria antibiotic, doxycycline. I stopped taking the doxy because it made me feel nauseous, so I came home with most of mine.
FOB Ghazni was run jointly by Americans and Polish forces. I remember visiting Ghazni once near the beginning of my first Afghanistan tour and then again near the end of that tour. I was told the Polish military was having trouble meeting officer quotas, so they were frocking officers to higher grades and shipping them out to Ghazni without proper preparation. The result was poor leadership and a bunch of soldiers doing bizarre things with destroyed morale. Many Polish soldiers were also running their own distilleries on post and getting drunk. Please bear in mind the soldiers at FOB Ghazni were seeing a lot of combat action outside the wire, and they were not handling it well. When I came back months later the base looked like a completely different place. The Poles had sent in an experienced brigadier general to get things under control, and those Polish soldiers looked sharp. The new Polish leadership had really turned things around.
FOB Ghazni also had to contend with poverty problems. One of these problems involved wild dogs wandering onto base from the nearby city of Ghazni. During my first visit there was a big problem with drunk Polish soldiers feeding the wild dogs as pets. This was a severe problem because when those Polish soldiers cycled out the dogs were still there and became a security problem, which meant American soldiers had to round up these wild dogs, drive them a short distance from post, and shoot them. This was really hard on morale and really made people angry.
There was another incident when parents walked their two children to the base for medical treatment. The children had turned yellow from advanced jaundice and the parents didn't know what to do. The young doctor, who was herself an Afghan, felt sorry for these children and provided them with antibiotic shots to reduce liver inflammation. The next day 30 families showed up at the base with their children seeking treatment. They had to be turned away because the base didn't have enough medicine to support the civilian population.
I first started writing web code as a high school student in the 90s and continued to explore the creative potential of web design until I graduated from college. At this time in 2006 I was interested in visual design and did not know how to program. I had been writing HTML for years but was learning the relevance of semantic text description and learning to write CSS for the first time.
Shortly after graduating from college I joined a small marketing firm named Digital Alchemy. Digital Alchemy was purchased several years ago and the brand no longer exists. The work there primarily involved embedding HTML into email to impose branding and visual appeal for high end hotels and resorts. This turned out to be extremely challenging, as email is a completely unforgiving medium. You had to learn to write the most durable conventions possible to prevent your creative designs from falling apart in the wild. At this point my confidence with CSS greatly increased.
In fall of 2007 I joined Travelocity as a web designer. I didn't know it at the time, but I took a written code test as part of the initial interview process and aced it. After doing almost no work as a designer for my first 3 months at Travelocity, they involuntarily reassigned me to a frontend developer position. I didn't know how to program at all. I just had to figure it out, primarily in JavaScript but also a little bit in TCL and Perl. From early spring 2008 until summer 2009 I struggled to figure it out and occasionally broke things in production. I earned a negative annual review that year, which is the kind of excuse an employer needs to fire people, but good frontend developers are hard to find. My job remained secure through two major layoffs.
From summer 2009 until late summer 2010 I separated from Travelocity for a military deployment to Afghanistan. During that time away I invested an enormous amount of energy in teaching myself to become a better programmer by devoting much of my free time to my first open source application: Pretty Diff.
By the time I came back to Travelocity my confidence in writing logic for websites had dramatically increased. After a few months they promoted me to senior developer and reassigned me to become the company's A/B test and experiment engineer. This was hands down the most fun I ever had as a software developer. The work was fast, high risk, and more challenging than anything I had done before. Injecting external libraries into the experiment code proved to be too slow and fragile, often breaking the site in production. I had to learn to write code that executed as fast as possible and to walk the DOM in order to deface the site in various creative ways and measure the business result.
In late summer 2012 an Army unit in Los Angeles pulled me in for a second Afghanistan deployment. When I came back to Travelocity in early fall 2013 the company had largely imploded from about 3200 employees to about 1100 and people were walking out the door every week.
Fearing for the security of Travelocity as a business, I left in late winter 2014 for a contract position with tax giant Intuit. The same week I joined, the department I had joined announced an internal reorganization: most of the jobs would be relocated to California and contracts would not renew. I rode this out until July knowing I would be terminated around that time.
Fortunately, design firm Razorfish found me. Their regional office was located in Austin, TX, about 3 hours south, but their largest client, Southwest Airlines, was located in my area. This large client was deeply troubled, putting the contract with Razorfish in grave danger. Razorfish hired me to be their first fully remote employee, working on site at their client's location. My job was to report my observations of internal business operations back to Razorfish and to guide Southwest Airlines to achieve WCAG 2.0 AA accessibility compliance across the board for their web-based products. I even wrote the prototype for their e-commerce search results page. After about 9 months both parties achieved what they wanted: Southwest Airlines was on the proper trajectory to execute their website, and Razorfish achieved clarity in how to maneuver negotiations with Southwest, which meant my job was no longer necessary. Razorfish offered to relocate me to Austin.
At that moment former colleagues from Travelocity reached out to me about a position with Orbitz. In my time away Travelocity was sliced apart and sold. The retail business and Travelocity brand were sold to Expedia while the more profitable partner business was sold to Orbitz. I joined Orbitz to supply additional front end labor to help ship their largest client, Bank of America. After about a year Expedia purchased Orbitz and I became an Expedia employee.
After asking me to interview 3 times, Bank of America finally got me to agree. Out of 79 people interviewed that day I was their top pick of the 9 people selected, and the only senior developer selected. I joined on a contract-to-hire basis and after 3 months converted to an employee. Hands down, Bank of America is by far the best people organization I have ever worked for. They do a phenomenal job of taking care of employees. They are also the largest organization I have ever worked for. Before Bank of America my largest employer was Expedia with 18,000 employees. Bank of America had around 270,000 employees in 2017.
From a software technology perspective things ran a bit slow at the Bank. At this point a pattern began to emerge: the larger the employer, the less internal standardization existed on which software technologies teams could use to write software. Counter-intuitively, the smaller the employer, the more controlled and regulated all aspects of writing software become. On one hand the internal freedom to explore different languages and frameworks is incredibly liberating, but it can also become entirely chaotic as quality standards become less universally defined.
With COVID era work from home mandates lifting, I began looking for remote employment. This seemed like a good idea at the time, because my home town is the fastest growing large city in the US by a staggering amount, which means tremendous road traffic on top of a 55 mile commute. I also wanted to take the opportunity to look for a smaller employer where I could deliver fast. This proved to be a colossal mistake: the technical approach to the work was micro-managed, defensively guarded, and littered with tech debt. When that is coupled with loose code organization and no test automation, any refactoring effort becomes extremely high risk. I was eventually laid off, which is financially stressful, but the work was becoming excessively stressful anyway, so the emotional impact was negated.
Welcome! Please enjoy the personal website of web developer Austin Cheney. I am a full stack web developer and Army veteran looking for work. Check out my resume, or use the main menu in the top left corner of this page to see other projects and features. The code for this website is largely extracted from my personal project Share File Systems.
Written 15 August 2023
Anything that can be written in JavaScript can be written with only 4 ingredients: data structures, primitives, functions, and APIs. The problem with writing code, then, is not the language or an instance of code so much as the assumptions a developer places upon that code.
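As a minimal illustration of those four ingredients, the hypothetical utility below uses only primitives (strings and numbers), data structures (an array and an object used as a map), functions, and built-in APIs (String.prototype.split and Array methods); the names are invented for this sketch:

```javascript
// countWords is a hypothetical example name, not from any real project.
// Ingredients used: primitives, data structures, functions, APIs. Nothing else.
const countWords = function (text) {
    // API + data structure: split the primitive string into an array
    const words = text.split(/\s+/).filter(function (word) {
        return word !== "";
    });
    // data structure: a plain object used as a tally map
    const tally = {};
    words.forEach(function (word) {
        tally[word] = (tally[word] || 0) + 1;
    });
    return tally;
};

console.log(countWords("the cat and the hat"));
// { the: 2, cat: 1, and: 1, hat: 1 }
```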
Many developers, often due to poor education and insufficient experience, attempt to inject conventions and syntax into their code that they don't need. For example, developers coming from a Java or C++ background without formal training in JavaScript will attempt to write Java or C++ conventions into JavaScript, because they lack the necessary guidance to know otherwise. This results in dramatically increased code size, increased tech debt, poorer performance, and greater reliance upon external layers of abstraction that limit freedom of expression.
There are two key actions that eliminate more than 90% of unnecessary complexity in JavaScript:
The keyword this introduces tremendous complexity and often does not do what most developers believe. The only scope mechanism in JavaScript is lexical scope. In arrow functions this is lexically scoped, which means it always resolves to the this of the containing function. In all other cases this refers to the prior step in the call stack: for methods, the object on which the method is attached; for other functions, whatever called the function.
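A small sketch of that call-site behavior, using invented names:

```javascript
"use strict";

// counter and maker are illustrative objects invented for this example.
const counter = {
    count: 41,
    increment: function () {
        // called as a method, this is the object before the dot
        return this.count + 1;
    }
};

console.log(counter.increment()); // 42

// The same function detached from its object loses the this binding.
const detached = counter.increment;
try {
    detached(); // in strict mode this is undefined, so this.count throws
} catch (error) {
    console.log("detached call failed:", error.constructor.name); // TypeError
}

// An arrow function ignores the call site entirely: this is whatever
// this was in the enclosing lexical scope when the arrow was created.
const maker = {
    count: 7,
    makeArrow: function () {
        return () => this.count; // this captured from makeArrow's call
    }
};
const arrow = maker.makeArrow();
console.log(arrow()); // 7, even when called with no object at all
```

Three call sites, three different answers for the same keyword, and none of them visible in the function body itself.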
A great many exceptionally poor decisions in the design of applications result directly from confusion about the conventions in place. There is only confusion when attempting to determine the value of this, because in any instance of code the value of this is never explicit. It always requires research or guessing.
So, don't use this. The language does not force that convention upon you, so do not voluntarily punish yourself with it. There is never a good reason to do so. Despite all evidence and logic, many developers will continue to reinforce poor practices due to conservatism and anchoring bias.
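One common alternative, sketched here with invented names, is a factory function whose state lives in closure scope instead of on this:

```javascript
// createCounter is a hypothetical factory; state lives in the closure,
// so there is no this to lose when methods are passed around.
const createCounter = function (start) {
    let count = start;
    return {
        increment: function () {
            count = count + 1;
            return count;
        },
        current: function () {
            return count;
        }
    };
};

const tallyCounter = createCounter(10);
const detached = tallyCounter.increment;
console.log(tallyCounter.increment()); // 11
console.log(detached());               // 12 — still works detached
```

Because the functions close over count lexically, they behave identically no matter how or where they are called, which is exactly the explicitness this denies you.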
If you suffer from conservatism or anchoring bias, please read my thoughts on OOP, where I fully describe the technology and economic problems OOP no longer solves.
I have been writing JavaScript full time for more than 15 years as I write this. During that time I have found that many developers cannot apply systems of organization in even the most simple and immediate instances. That inability results in reliance upon external things to provide that organization on their behalf. These external things then become a crutch, and that crutch becomes the limit of their capabilities, which means when a problem occurs that cannot be solved by the crutch, the problem will remain unsolved. If that problem contributes to tech debt it will become more expensive over time, which should be expected because in business terms application code is always a cost center.
I use the following scenario to describe this to non-developers:
Imagine parenting a child around 7 to 9 years old. That child possesses the maturity to form complex thoughts and reason through various challenges, but lacks the maturity to plan and apply critical decisions even in the absence of any risk. You ask that child to clean their room and the child panics because a colossal mess stands before them and they don't know where to start or how to proceed. You remind the child they already know what to do: put dirty clothes in the hamper, pick up trash, make their bed, pick up toys, and finally sweep the floor. The child knows of those steps in isolation and has completed them several times in the past, but cannot put them together in a meaningful way.
The behavior in that scenario applies equally to many adults. Cognitive planning and forming systems of organization come more gracefully to some people than others. Psychology explains this behavior as a facet of conscientiousness. In layman's terms the common anecdote describing the behavior goes something like: cannot see the forest for the trees.
Strangely, of the Big Five personality traits conscientiousness is the least correlated with intelligence, with a correlation as low as -0.27. That means intelligence alone remains largely unreliable in solving for better organizational capacity, and without sufficient practice many highly intelligent people will never perform well at this. Anybody can build better organizational capacity, but doing so requires practice, repetition, and experimentation no differently than improving any other cognitive skill.
After mitigating away unnecessary complexity the next step on the journey to simple JavaScript requires an understanding of the environment you are writing for. The compile target of the web browser is the DOM, and for Node.js it is Node's standard API. Those inescapable facts will alter how a developer perceives the code they write once those facts become accepted by the developer on an emotional level. Emotional connections to the work form the key to attaining automaticity, which describes how the brain learns to complete large series of low level tasks without cognitive effort. Muscle memory demonstrates one example of automaticity. Achieving the apex of skill mastery, unconscious competence, only occurs through automaticity.
Since many developers get lost in the insanity of unnecessary complexity they become utterly incapable of accepting the nature of their environment. Developers demonstrate the result of that lost acceptance through various forms of emotional trauma such as insecurity, apprehension, blame/diversion, contempt, stonewalling, and so forth. People frequently use the term toxic to describe such negativity because it harms everything like a cancer slowly creeping across the body. Psychologist John Gottman describes that emotional negativity as the ultimate predictor of failure in human relationships, but such failure applies with equivalence to all things.
Together honesty and intimacy comprise positivity, the opposite of negativity. Honesty describes the capacity for truth, which is yet another cognitive skill that requires continual deliberate practice to improve. Demonstrations of superior honesty result in advanced moral character that builds trust and inspires confidence, which form the emotional bedrock of leadership according to the US Army. When a person advances their understanding of honesty sufficiently they will alter how they perceive themselves, the world outside themselves, and how those qualities interact. With respect to authoring code, or any other activity, advanced honesty results in greater abilities to question prior conceived notions, which then allows consideration of possibilities not open to consideration before.
Intimacy is the connection, or link, between points that comprise a relationship such that the primary focus of the link is to maximize positive outcome and mitigate away negativity. The word intimacy most frequently describes social relationships and even then frequently describes relationships between adults that yields consensual sex, but the word can equally apply to any data points that form a positivity focused connection. In a purely biological sense the common use of intimacy makes sense because pair bonding exists for security and reproduction. In regards to any activity, such as code authorship, greater intimacy allows for considerations of shorter paths between problems and solutions while simultaneously building greater interests in finding and reinforcing improved directness.
I was recently watching a video about a divorce attorney and how he perceived marriage failure after speaking about it with numerous clients. The frustrations always came down to lost intimacy that over time eventually resulted in broken relationships on an emotional level well before the legal contract of marriage came into dispute. All, and I mean every single one, of the behaviors he describes about marriage failure equally apply to how people perceive any activity where they invest sufficient time to form some kind of emotional connection. Becoming good at programming requires an absurd amount of time.
Both positive and negative emotions deeply affect the quality of output upon a thing, whether that thing is a physical product for sale or a pair bonded human relationship. Since all humans express both positive and negative emotions, and since both directly affect the quality of relationships between humans, those emotions spread like a raging virus between people. That explains why a deliberate understanding of applied positivity forms the emotional bedrock of leadership, because it allows the person forming that positivity to deliberately influence the people around them.
Once knowledge of the environment becomes familiar enough to build an emotional bond a new cognitive enemy takes stage: bias. Bias exists to reinforce deeply held assertions as necessary to free us from the emotional pain associated with the loss of emotional investment qualifying those assertions. Such assertions may include saving face or loss aversion. The underlying behaviors triggering bias are almost always non-cognitive emotional responses to a stress condition and are frequently associated with social conditioning. All humans demonstrate bias in various levels and forms.
To observe bias in other people watch their repetition of decisions looking primarily for motivation and evidence. For example I once had a supervisor attempt to convince me that JavaScript executes slowly and that slowness occurs because JavaScript is a single-threaded language.
First, descriptors such as slow suggest a comparison where one thing is less fast than something else, where that something else may go unstated. In this case the something else was logic executing in a SQL database application, but it was in fact never stated.
Secondly, in order for one thing to execute less fast than something else there must be measures of both things executing in similar conditions responding to similar inputs and generating similar outputs. In this case no such measures existed.
Finally, the statement that JavaScript is single threaded is true, but it is incomplete, and that incompleteness suggests outcomes that do not exist. Offering an incomplete qualifier demonstrates one or more logical fallacies depending upon motive and deliberation. The more complete technical answer is that JavaScript is single threaded and multi-call stack, where one CPU thread executes multiple tasks in sequence without halting, with each task occupying memory in isolation. This multi-call stack nature allows the CPU thread to continue executing a task until it must wait on external input/output, at which point it moves to the next call stack and executes that task until either completion or a wait scenario is encountered there. This rotation between call stacks is the event loop. I explained this to the supervisor, who claimed to have heard of the subject, but they did not seem to understand it and thus refused to revise their beliefs accordingly.
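The rotation between call stacks can be demonstrated in a few lines. This is a minimal sketch of my own, not code from any project discussed here:

```javascript
// A minimal sketch of the event loop: the single thread never halts on a
// timer; it finishes the current call stack first, then returns to the
// waiting task in its own call stack.
const order = [];

setTimeout(function () {
    // queued as a separate call stack; runs only after the current stack clears
    order.push("timer");
    console.log(order.join(",")); // start,end,timer
}, 0);

order.push("start");
order.push("end");
```

Even with a delay of 0 the timer callback runs last, because the thread completes the current call stack before rotating to the next one.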
In summary JavaScript may or may not be slow, but such statements demand comparative measures as qualifiers. The absence of evidence resulted in the total reliance upon a prejudiced belief system, which in this case was never qualified and later proved both insignificant and invalid.
Conduct measures with a goal to challenge deeply held opinions. The exercise of measuring results in more durable decisions less prone to failure and more open to pivots as new evidence and conditions arise.
In order to write clean JavaScript you must simply follow these few steps.
Rarely, as in almost never, do I see developers attain mastery of this language, but there are developers that do.
Mastery occurs not necessarily because of talent, but primarily because of refinement.
I also hope these steps may serve more broadly than writing JavaScript or even just programming, because the behaviors and results remain the same irrespective of the task at hand.
I entered the military and attended US Army 'basic training' during the summer of 1997 before I graduated high school as a private E1. I started at the complete bottom. My job code at that time was 74B, later redesignated to 25B, which means information systems technician/analyst under the US Army Signal Corps. My first assigned unit was the 145th Medical Logistics Battalion located in Seagoville, TX.
In late summer 2001 I was deleted from the unit's METL (Mission Essential Task List), which is the corporate equivalent of a team being disbanded due to funding restrictions without its members being terminated, like a lay off. Fortunately, a new cyber defense unit had sprung into life and was eagerly absorbing signal soldiers such as myself. I transferred to that cyber unit, then called the Southwest Information Operations Center (SWIOC) and now named the SouthWest Cyber Protection Center, in San Antonio, TX, and began learning the art of information security defense. I was a Sergeant (E5) at that time.
The 335th, a 2-star command, requested 50 soldiers from across the US Army Reserves to fill a deployment and I was involuntarily selected. This was hard as it was my first time on a military deployment, my first child was only 4 months old, and I had never been away from family for more than a year before. At this time I was an extremely fresh Staff Sergeant (E6).
Our group of 50 was called 'det. 8', for the 8th manning detachment supplemented to the 335th's mission in CENTCOM at Camp Doha, Kuwait. All the older, smarter, more educated people were selected to fill the engineering and analytical positions. As a college student with no professional experience I was chosen to become a leader in the dungeon, or operations. I call it a dungeon because it's the most visible cell in the unit, and if you get really good at it they won't let you escape.
In order to avoid the insanity of senior officer briefings I chose to work the night shift where I became the technical supervisor. I was hoping the night shift would be boring, but it wasn't. The mission for the 335th was to support and maintain all land-based communications, voice and digital, for all of Iraq, Kuwait, and Afghanistan during two separate wars. We were supporting a network of around 300,000 users and this was my first time in management.
Upon completion of my first deployment I returned to my reserve unit SWIOC until the Army selected the unit for deployment in 2009. SWIOC would send staff to supplement the Regional Computer Emergency Response Team for Southwest Asia (RCERT-SWA), located in Camp Arifjan, Kuwait. This deployment also included a single two person team for each of Iraq and Afghanistan called Computer Defense Assistance Program (CDAP). I was selected to join the Afghanistan CDAP team as the junior member.
Traveling throughout Afghanistan was perhaps the most amazing experience of my life. Sharper, taller mountains than any I have seen in North America, with diverse natural environments and diverse people. I lived in a wide variety of conditions and it was great. My two person team primarily lived on Bagram Air Base just north of Kabul, but we traveled to many different locations. If my memory is correct we conducted 24 assistance missions during this year in Afghanistan.
Our mission in Afghanistan required that we travel to major US Army installations and provide assessments of information security posture at each location. We had no command authority, so these were friendly audits more informational to the client location and also served as aggregated research material.
After doing this security work for a year in extreme conditions, and thinking about security as a manager and auditor, I worked up the nerve to take the dreaded CISSP exam. This was a 250 question test with lengthy questions. It required authorization based upon prior experience and a $700 fee, and yet it still only had a 60% pass rate. I discovered there was a test date on Camp Liberty in Iraq at the extreme end of this deployment and received travel authorization at the last minute. I paid the fee and made plans for travel. This was my first and only time in Iraq, and I got lost when I landed at this incredibly large base. I walked around the base for several hours attempting to find the Iraq CDAP team, but had no luck, so I finally went to bed at 0200 in the morning. I woke up 3 hours later in an attempt to reestablish communications, because I still had no idea where I was going and the test was scheduled to start at 0800. They eventually found me and everything worked out. I took the test on almost no sleep after an exhausting day of traveling, and spent 5.5 of the 6 allowed hours checking my answers before becoming too exhausted. 3 months later I received my results in the mail. I passed.
In 2012 I received promotion orders to Sergeant First Class (E7) dependent upon accepting transfer orders to the 300th Sustainment Brigade closer to home. Almost immediately upon arriving at the 300th I was selected for deployment with the 311th Sustainment Command based out of Los Angeles, California. I became the Noncommissioned Officer In Charge (NCOIC) of Knowledge Management for the command.
The 311th arrived at their deployment home of Kandahar Air Field, Afghanistan for a 9 month deployment. Southern Afghanistan is far different than the north. It's hot, humid, and dusty. Kandahar Air Field also had the poo pond, an open air cesspool about an acre in size that stunk up the entire base. Kandahar was as miserable as Bagram was stunning, and unlike my prior Afghanistan deployment I did not travel.
The mission of a Knowledge Management team requires promotion of transparency and sharing between teams of a unit as necessary to build better products and increase internal automation. Our team accomplished this through the product vision of my supervisor, the support of our commanding general, and my training visits to all 25 teams comprising the unit. These teams include staff sections, special staff teams, and commodities of Support Operations (SPO) branch. Our team invested heavily in the creation of a new SharePoint product and staff training.
Months after returning from Afghanistan I began the administrative process of a Warrant Officer selection packet. In order to ascend to a Warrant Officer a board of officers reviews packets of hopeful candidates to determine eligibility to attend a candidate course. Due to strong annual evaluations, prior leadership, completion of my Bachelor's degree, and years of contributions to open source software my packet was strongly approved in early 2016. I attended the Warrant Officer Candidate School (WOCS) in May, which is like military basic training all over again to filter out older soldiers without the proper leadership attitude and flexibility. Upon completion of WOCS ascension to officer was complete and I achieved promotion to WO1 (Warrant Officer 1). Over the next two years I completed my 9 months of career training at Warrant Officer Basic Course (WOBC) for MOS 255A.
9 days after I graduated from WOBC my organically assigned unit, 300th Sustainment Brigade, was scheduled to deploy. I relished the one week of time I had at home before leaving my family for an extended period again. After completion of required pre-deployment readiness the unit arrived at Camp Arifjan, Kuwait in October 2018. Shortly after arriving I received a promotion to Chief Warrant 2 (CW2).
The unit's mission was to supervise two sustainment battalions and report mission assessments to our supervising unit, 1st Theater Sustainment Command. I occupied the position of 'Chief of Network Operations' and reported directly to the brigade S6, which is the staff officer in charge of communications management for the brigade. My responsibilities included maintaining and extending communication infrastructure in our brigade compound as well as building relationships with external units to ensure continuity of network service support and acquisition of communication materials as necessary.
Only a few weeks after arriving in Kuwait we experienced catastrophic flooding. The first flood occurred on 4 November 2018 and was the most extreme flood in Kuwait's 300 year history, resulting in widespread property damage. On 14 November 2018 another flood occurred, three times greater than the prior record setting flood. This flood was substantial enough to float large concrete barriers down the road and cause severe infrastructure damage to the base. There are YouTube videos showing the devastation this flood caused for Kuwait City.
This deployment was amazing because all travel restrictions within Kuwait were lifted, so that the only restrictions came from unit commanders regarding their assigned soldiers. I found myself traveling to the nearby Kuwaiti Naval Base, across the country to Camp Buehring, and on several occasions sight-seeing downtown. Kuwait has the largest and most impressive shopping mall I have seen in my life, the Avenues. I enjoyed the civilian food as among the best I have ever eaten.
In January 2020 two catastrophic things happened: COVID-19 and a return to Bank of America's internal team Contact Center Voice Technologies (CCVT), which featured a workaholic micro-manager for a team lead. Fearing what was about to happen with COVID I sought out a deployment with the 311th and through a personal contact was invited to participate. Unfortunately, it took them months to onboard me, so I didn't get to the pre-mobilization center until April and finally deployed in June.
I deployed to the unit's location at Camp Arifjan, Kuwait. At this point I had spent so much time at Camp Arifjan I was starting to feel like a mayor. COVID lock downs were in full effect.
Upon arriving to the unit they immediately appointed me supervisor of the help desk. I had a staff of 16 soldiers including 4 subordinate leaders. I wrote a new trouble ticket application in SharePoint and eventually reduced our number of open tickets from 253 to 0. This was my most challenging deployment. There existed widespread physical infrastructure damage throughout post due to the floods of late 2018. The post network team normally comprised 60 civilian network engineers, but due to COVID all but 2 were removed from base and told to shelter in place. Our team had to contend with and work through these issues beyond our control. We could not leave post and there was in-fighting in our section from leaders in greater positions of authority than myself. This deployment was stressful. As an outlet I programmed at night and formulated a new approach to distributed test automation in web browsers.
Written 18 August 2023.
It seems many people advocate for object oriented programming (OOP) without knowing what it is or what problem it solves.
A team of Norwegian engineers invented the concept of OOP with the language Simula 67 in 1967. The prior popular programming paradigm, called procedural, expressed instructions in groups called procedures. Each procedure wrote to memory, and access to that memory through procedure calls created a means of expression that allowed instruction reuse per instruction group. OOP achieved a more thorough memory reuse scheme through a group of instructions called an object. A developer then clones and extends objects as necessary through a process called poly-instantiation. Poly-instantiation allows for the cloning of objects, called inheritance, but the cloned object shares the same memory as the object from which it was cloned.
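JavaScript itself can demonstrate that memory sharing idea through its prototype mechanism. This is a minimal sketch of my own; the names base and clone are hypothetical:

```javascript
// A minimal sketch of the memory sharing behind inheritance:
// the clone does not copy the base object's members, it references them.
const base = {
    greet: function () {
        return "hello";
    }
};
const clone = Object.create(base); // inherit from base, copy nothing

console.log(clone.greet());                 // hello
console.log(Object.hasOwn(clone, "greet")); // false - inherited, not copied
console.log(clone.greet === base.greet);    // true  - the same function in memory
```

The clone answers for greet even though it owns no greet of its own, because the lookup walks to the base object: one copy in memory, many points of access.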
The only purpose of these programming paradigms, procedural and OOP, was to balance code reuse against memory conservation. Looking at the historic cost of memory, 1mb cost about $734,000 in 1967 when Simula 67 was invented.
In 1979 a Danish academic started work on C language with classes. This did not work out, and so he started over with a new language named C++ that he published with his PhD dissertation in 1982. The language became quite popular and for a while became the de facto programming language of both university computer science departments and corporate software production, thus becoming the normative language of institutional education. Even by 1985 the cost of memory was still around $880 per 1mb.
In 1995 Sun Microsystems released Java. Java inherited many ideas and idiomatic patterns/styles directly from C++. One critical distinction between Java and C++ centers around memory management. C++ uses memory pointers to set and free memory in a manner very similar to C language, but Java uses a more automated process called garbage collection. With garbage collection, memory allocation occurs automatically as an application executes, performed by lower level code interpreting the given language, and that lower level interpreter frees the memory when the application no longer needs it. Later in 1995 another garbage collected language, named JavaScript, was invented. The cost of memory was down to about $31 per 1mb.
Java was invented to solve several business problems. One of these problems included the concept of write once, run anywhere, which describes a single programming language environment that executes in an identical way on different operating systems. Another business solution included making use of the conventions popularized, and later institutionalized, by C++ but with less learning effort and greater access to extensions through package distribution channels.
The design decision to leverage the popularity of C++ also illustrates the single greatest difference between Java and JavaScript. JavaScript is inherently an OOP language due to its internal definitions and type system, however the JavaScript language takes no position on the stylistic conventions of developers. This allows JavaScript to achieve a designation named multi-paradigm, while Java maintains a single paradigm style imposing a single expressive C++ like system for writing application code. By 1995 the cost of memory had become cheap enough that developers were free to deviate from restrictive memory conserving conventions. At the time of this writing memory costs about $0.0015 per 1mb, or roughly $1.50 per 1gb.
OOP was created to allow a greater degree of programming expressiveness with a greater conservation of memory. In modern programming the technical purpose of OOP, memory conservation, is absent from most high level languages, replaced by automated conventions. The high cost of memory has also largely disappeared. If UltraRAM becomes a commercial reality computers may cease to distinguish between memory and storage altogether.
The reason OOP remains popular is due solely to institutional social conventions since both the economic and technical motives are long gone. OOP is still the primary means of computer science education in universities. Most of the modern legacy applications in commercial use within the last 40 years make use of OOP.
The popularity of OOP largely exists as a result of a broken feedback loop. Universities continue to teach OOP because industry says that is what it wants. Industry claims to want OOP because that is what the universities teach and training developers costs too much.
Many people will argue that universities primarily teach OOP because industry has OOP applications that need to be maintained, but this is faulty logic. My university continues to teach COBOL, a language from 1959, because there are still companies that execute COBOL applications. That is true, but COBOL is not the primary language of education in any computer science program even where it is taught. The primary reason universities principally teach OOP is because it is the language concept most familiar to university educators, a deeply institutional environment.
In industry, then, there are only two reasons to continue the practice of writing OOP style code. First is to maintain legacy applications. Second is social compatibility with older developers who only know the OOP programming paradigm.
It appears the future of programming is the functional programming paradigm, according to John Carmack. Contrary to the Wikipedia link I provide for functional programming, it is frequently a declarative programming style, but not inherently so, and may be exceedingly imperative. For example, a program that makes explicit use of event handling in the control flow of its logic, and is designed solely around that consideration, would be extremely imperative.
Functions are yet another form of instruction grouping like procedures and objects. Functions and procedures differ from objects in that they both execute instructions directly, as opposed to just grouping instructions into a common point of reference. Unlike procedures, functions may optionally receive input and always return output. SQL databases illustrate the distinction between functions and procedures because they allow both stored procedures and functions as separate types of artifacts.
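The distinction can be sketched in JavaScript; recordStep and double below are hypothetical names of my own:

```javascript
// A minimal sketch of the distinction: a procedure groups instructions and
// acts through side effects on shared memory; a function receives input and
// returns output.
const log = [];

const recordStep = function () {
    // procedure-like: writes to shared memory, returns nothing
    log.push("ran");
};

const double = function (x) {
    // function-like: input in, output out, no shared memory touched
    return x * 2;
};

recordStep();
console.log(log.length); // 1
console.log(double(21)); // 42
```

The procedure is only observable through the memory it mutates, while the function is fully described by its input and output, which is what makes functions the easier grouping to reason about.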
Some people consider functional programming superior to OOP because the concept of poly-instantiation is wildly complex, where object inheritance results in many vaguely similar artifacts. Functional programming also achieves a substantial reduction in code volume because the code execution point and the grouping mechanism are one and the same. There is also less need for syntactical decoration in functional programming, where variable value assignment is explicit, as opposed to the implicit nature of a pronoun-like reference referring to artifacts in the call stack or scope chain for value assignment.
In JavaScript don't use OOP conventions. OOP conventions greatly increase the code size, which increases the time to maintenance (also known as tech debt), and also increases the complexity of both organizational conventions and syntax. More explicitly never use the keyword this or anything that either makes use of or is reliant upon keyword this.
In JavaScript OOP is far more challenging than in other languages. This is because in modern times OOP only exists due to social institutions which teach a C++ class based inheritance scheme. In JavaScript the inheritance mechanism is quite different, using a concept called prototypes, which comes from the language Self. Also, the keyword this is procedural, referring to the prior step in the call stack. In functions that means this refers to that which called the function, which could be a different function. In methods this refers to the object on which the method is attached. In arrow functions, however, this is lexical as opposed to procedural, so it refers to the this of the scope that contains the given arrow function.
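A minimal sketch makes that unpredictability observable; all names below are hypothetical examples of my own:

```javascript
"use strict";
// A minimal sketch: the same function reports a different value of this
// depending entirely on how it is called, which is why the value is never
// explicit in the code itself.
function whoAmI() {
    return (this === undefined) ? "nobody" : this.name;
}

const a = { name: "a", report: whoAmI };
console.log(a.report()); // a - this is the object before the dot

const detached = a.report;
console.log(detached()); // nobody - the same function, but no object (strict mode)

const holder = {
    name: "holder",
    delayed: function () {
        // the arrow function takes this lexically from delayed, so it sees holder
        const arrow = () => this.name;
        return arrow();
    }
};
console.log(holder.delayed()); // holder
```

One function body, three answers, and nothing at the call site announces which answer you will get without tracing the call stack yourself.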
Many developers commonly get OOP wrong in JavaScript, and since the language is multi-paradigm just don't use OOP, particularly this. Since the value of this is never clear or predictable to developers just reading the code, many developers will apply unnecessary syntax decoration, further increasing code size and reducing both clarity and predictability. So, just don't do OOP in this language. I promise, you will thank me later.
The only two times use of OOP cannot be avoided in JavaScript is to extend OOP code you didn't write or to extend internal features of the language directly.
As a thought exercise ask yourself: "Self, what is the most valuable quality of OOP?" When you ask yourself that question out loud, so you can hear it in your own voice, your mind typically goes searching through all the things you know of programming. When you ask it of another person in casual conversation the answer is almost always, and almost exclusively, something related to attaining employment. The distinction is both striking and interesting. The most interesting part is that people want to validate their investment of time. They spend the time and cognitive energy attempting to qualify this consideration so long as that time is available, but in conversation, where timing condenses away the creativity, they immediately go to a more honest place.
A Node.js project that is on the verge of becoming a distributed operating system. The code for this web site is crudely ripped from the compiled code of Share File Systems. I originally wrote this application because I wanted secure private access to my home computer while living in Asia. I don't want some third party server having access to my file system. It's my data on my computer, so I wanted a solution to access it over a network.
I started this tool to combine a code beautifier with a code comparison tool so that differences become visible irrespective of code minimization. I stopped working on this application 4 years ago and yet its NPM page is still downloaded about 6000 times a week. This application once achieved 534,000 downloads in a single day.
The Sparser application is a multi-language parser spun out of the Pretty Diff application.
I originally wrote this for Southwest Airlines, where it was immediately put to use as they eagerly attempted to achieve universal WCAG 2.0 AA compliance. There were numerous color contrast tools available at the time, but I didn't see anything that compared the entirety of a large color scheme.
A tool to examine the quality of HTML text description. Simply copy the code and paste into your browser console to see an immediate report.
A simple browser based slide show application I wrote.
I preserved screenshots of some of my work from my first professional job, designing creative branding for high end hotels and resorts and getting it to work in email. These screenshots were taken from MS Outlook prior to MS Office 2007.
Written 21 April 2021
State management is the means by which settings and configurations of an application are stored and applied. It is not associated with the collection or storage of human consumable content or contributions.
There are only 4 parts to state management:
A state artifact is one or more data structures that retain an application's configuration. This is certainly the single most important consideration of state management. My preference is to use a single object that stores a collection of similar items where an item is a uniquely separated area of content or functionality as determined by your application. Contrary to many frameworks I strongly suggest separating the state artifact from the components represented by that state information.
Collection is the means by which the state artifact is updated. My preference is to write to the state artifact directly for a given piece of functionality, but many application frameworks will provide an API to solve this issue because they do not provide a single state artifact. I strongly suggest state updates via abstraction only as necessary to eliminate repetition in your code.
Storage is the means of writing the state artifact to a file. In the case of web technologies you have several different options: Cookies, local storage, IndexedDB, the file system, or a server side application. I strongly recommend not using cookies, because they are limited in size to 4kb per domain and are the slowest option available. My personal preference is to write state to the local file system if you are running local services, because this allows computer wide availability of the state data without a performance penalty. If a local service is not running on the given computer I recommend storing state data in either local storage or IndexedDB because they allow for a large volume of data storage without penalty of waiting on the network or a distant server.
Recovery is the means of reading state artifacts from storage and applying this data to the application. A well written application only needs to apply state recovery on start up or page load. Many web frameworks will either ignore state recovery on page load or apply it every time a component loads or becomes available, which are both poor practices.
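The four parts can be sketched together in a few lines of JavaScript. This is a minimal illustration, not code from any particular application; the artifact's fields and the injected store (an in-memory stand-in for what would be localStorage in a browser) are assumptions:

```javascript
var createState = function (store) {
    // 1. state artifact: a single object holding all configuration
    var state = {
        color: "default",
        modals: {},
        zIndex: 0
    };
    return {
        state: state,
        // 2. collection: write to the artifact directly, then persist
        update: function (key, value) {
            state[key] = value;
            this.save();
        },
        // 3. storage: serialize the entire artifact in one operation
        save: function () {
            store.setItem("configuration", JSON.stringify(state));
        },
        // 4. recovery: apply stored data over the defaults, once, at start up
        recover: function () {
            var stored = store.getItem("configuration");
            if (stored !== null) {
                Object.assign(state, JSON.parse(stored));
            }
        }
    };
};

// an in-memory stand-in for localStorage, for demonstration only
var memoryStore = (function () {
    var data = {};
    return {
        setItem: function (key, value) { data[key] = value; },
        getItem: function (key) { return (data[key] === undefined) ? null : data[key]; }
    };
}());

var app = createState(memoryStore);
app.update("color", "dark");

// a fresh instance recovers the stored state over its defaults
var restored = createState(memoryStore);
restored.recover();
```

Notice that recovery happens exactly once, on creation, rather than every time a component appears.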
Share File Systems is a peer-to-peer file system application I wrote in TypeScript with a full Windows-like GUI. In this application the various windows are internally referred to as modals. Each modal has a uniform data structure even though their content and functionality may differ wildly. I use a TypeScript interface named ui_data as my single storage artifact and the modals are defined as interface modal:
// https://github.com/prettydiff/share-file-systems/blob/06e7ae50e753e7f54867fdb571ab384be7815ab7/lib/typescript/environment.d.ts#L22-L38
interface ui_data {
    audio: boolean;
    brotli: brotli; // 0-13
    color: colorScheme; // a choice of string names
    colors: colors; // a data structure of two colors applied to allowed agents
    hashDevice: string;
    hashType: hash; // a choice of string values for user's choice of hash functions
    hashUser: string;
    modals: {
        [key:string]: modal;
    };
    modalTypes: modalType[]; // a set of strings indicating the types of modals currently opened to the user
    nameDevice: string;
    nameUser: string;
    storage: string;
    zIndex: number;
}
// https://github.com/prettydiff/share-file-systems/blob/06e7ae50e753e7f54867fdb571ab384be7815ab7/lib/typescript/browser.d.ts#L149-L182
interface modal {
    agent: string;
    agentType: agentType;
    callback?: () => void;
    content: Element;
    focus?: Element;
    height?: number;
    history?: string[];
    id?: string;
    inputs?: ui_input[];
    left?: number;
    move?: boolean;
    read_only: boolean;
    resize?: boolean;
    scroll?: boolean;
    search?: [string, string];
    selection?: {
        [key:string]: string;
    };
    share?: string;
    single?: boolean;
    status?: modalStatus;
    status_bar?: boolean;
    status_text?: string;
    text_event?: EventHandlerNonNull;
    text_placeholder?: string;
    text_value?: string;
    timer?: number;
    title: string;
    top?: number;
    type: modalType;
    width?: number;
    zIndex?: number;
}
That is the entirety of the state artifact for that application.
The state data is updated on each user interaction by directly writing to that object. Here are some examples:
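The original examples were links into the repository. As a hedged sketch of the pattern, with hypothetical names standing in for the application's actual code (browser.data for the state artifact, a stubbed network.settings, and an invented moveModal handler):

```javascript
var browser = {
    // the state artifact: one object for the whole application
    data: {
        modals: {},
        zIndex: 0
    }
};
var network = {
    // in the application this issues an XHR carrying the state artifact
    // to the local service; stubbed out here
    settings: function () {
        return JSON.stringify(browser.data);
    }
};
// a drag handler writes the modal's new position straight into the
// state artifact, then persists the whole artifact
var moveModal = function (id, left, top) {
    var modal = browser.data.modals[id];
    modal.left = left;
    modal.top = top;
    browser.data.zIndex = browser.data.zIndex + 1;
    modal.zIndex = browser.data.zIndex;
    network.settings();
};
browser.data.modals["modal-1"] = {left: 0, top: 0, zIndex: 0};
moveModal("modal-1", 200, 100);
```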
In those examples state changes are assigned to the state artifact directly and then stored using the network.settings method. This method issues an XHR to a local service with the state artifact which is then written to a file named configuration.json. This occurs frequently. The slowest part of this process is actually writing files to disk. The application addresses this by first writing to a randomly named file and then renaming that randomly named file to the correct configuration.json thereby overwriting the older data. It is faster to rename a file than to write a new file, so this effort is used to prevent conflicts and race conditions that may arise from writing to disk too frequently. This occurs in the application at: https://github.com/prettydiff/share-file-systems/blob/06e7ae50e753e7f54867fdb571ab384be7815ab7/lib/terminal/server/settings.ts
The recovery process involves multiple steps. When the page is requested from the local service the configuration.json file is read from disk as well as the page's HTML file and other stored data. The contents of the configuration.json file, and some other data, are embedded into an HTML comment and injected into the bottom of the page HTML. See: https://github.com/prettydiff/share-file-systems/blob/06e7ae50e753e7f54867fdb571ab384be7815ab7/lib/terminal/server/methodGET.ts#L75-L99.
The local service responds with the HTML. A JavaScript file executes automatically as the page is parsed into the browser and this JavaScript walks the DOM to find that comment and extracts the embedded data, which is then assigned to the state artifact data object over the default data structure. See: https://github.com/prettydiff/share-file-systems/blob/06e7ae50e753e7f54867fdb571ab384be7815ab7/lib/browser/localhost.ts#L232-L241
Using the state data the JavaScript recreates each user interface artifact according to the parameters provided in the state artifact. See: https://github.com/prettydiff/share-file-systems/blob/06e7ae50e753e7f54867fdb571ab384be7815ab7/lib/browser/localhost.ts#L471-L490
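The recovery steps can be sketched as follows. This is an illustration, not the application's code: the comment marker format and the default object are assumptions, and a regular expression stands in for the DOM walk performed in the browser:

```javascript
var defaults = {
    audio: false,
    color: "default",
    modals: {}
};
var recover = function (html) {
    // locate the embedded comment at the bottom of the page;
    // in the browser this is a DOM walk rather than a regex
    var match = html.match(/<!--state:(\{[\s\S]*?\})-->/);
    // start from a copy of the defaults...
    var state = JSON.parse(JSON.stringify(defaults));
    if (match !== null) {
        // ...and let stored values overwrite them; missing keys keep defaults
        Object.assign(state, JSON.parse(match[1]));
    }
    return state;
};
var page = "<html><body></body><!--state:" +
    JSON.stringify({audio: true, color: "dark"}) + "--></html>";
var state = recover(page);
```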
State management is trivially easy. Don't overthink it. This is something any developer can easily perform without a framework.
This was originally written many years ago for my Pretty Diff personal website and the date is lost but I believe I wrote this in 2015.
This guide provides some basic introduction to terminology and techniques to describe parsing applications and then follows up with a simple step by step guide to write a parsing application. The scope of this guide is limited to formal techniques to interpret computer languages and datasets. It does not cover information related to compilation processes or tasks related to application execution.
A parsing application merely provides the means of taking some input and transforming it into forms that computers can understand. The ability to write parsers is to programming as the ability to write is to literacy, which is to say that learning to write a simple parsing application opens new worlds of instant capabilities. Some quick examples are a search engine spider, quickly scraping relevant data out of a spreadsheet, analyzing financial data, or quickly making sense of human language. The ability to understand and write parsing applications will instantly transform any mediocre programmer into a near rockstar.
The concepts, opinions, and theories expressed in this guide are universal to programming. The methods and approaches are expressed according to the JavaScript language. As an example of parser output, here is an abstract syntax tree (AST), in the ESTree format produced by parsers such as Esprima, describing the statement var answer = 6 * 7;:
{
    "body": [
        {
            "declarations": [
                {
                    "id": {
                        "name": "answer",
                        "type": "Identifier"
                    },
                    "init": {
                        "left": {
                            "raw": "6",
                            "type": "Literal",
                            "value": 6
                        },
                        "operator": "*",
                        "right": {
                            "raw": "7",
                            "type": "Literal",
                            "value": 7
                        },
                        "type": "BinaryExpression"
                    },
                    "type": "VariableDeclarator"
                }
            ],
            "kind": "var",
            "type": "VariableDeclaration"
        }
    ],
    "sourceType": "script",
    "type": "Program"
}
By contrast, the parse table approach used by Pretty Diff stores the same kind of information as a set of parallel arrays. Here is the markup parse table for the fragment <a><b>sample text</b></a>:

index | attrs | begin | daddy | jscom | linen | lines | presv | token | types |
---|---|---|---|---|---|---|---|---|---|
0 | [] | -1 | "root" | false | 1 | 0 | false | <a> | "start" |
1 | [] | 0 | "a" | false | 1 | 0 | false | <b> | "start" |
2 | [] | 1 | "b" | false | 2 | 0 | false | sample text | "content" |
3 | [] | 1 | "b" | false | 3 | 0 | false | </b> | "end" |
4 | [] | 0 | "a" | false | 3 | 0 | false | </a> | "end" |
In many parsing applications the lexer and parser are well separated subjects. In the common approach a lexer scans an input and produces tokens for the parser to analyze and the parser produces an AST for output. When the applications are well optimized they can execute more than twice as fast as the Pretty Diff approach, but there are limitations to this approach.
The parsers used in the Pretty Diff application combine the lexer and parser into a single operation. The lexer scans the input looking for syntax and once a token is identified it is immediately parsed before the lexer progresses forward. This allows advanced decisions, such as code correction and grammar analysis, to occur immediately. These advanced features make the total execution time of the parsing operation slower, which can make the Pretty Diff approach appear more than twice as slow as other comparable parsers. Despite that, the Pretty Diff approach is substantially faster and simpler than attempting to apply any such advanced analysis as a separate process outside the parser.
The Pretty Diff approach produces output in the form of parallel arrays instead of an AST format. The idea is that an AST can be created from a parse table approach provided one of the categories of data is structure and placement information, but a parse table cannot be created from an AST without running another parsing operation. The parse table approach also allows for sorting and analysis by selectively targeting various areas and data types without consideration for the output as a whole.
One of the primary reasons I prefer to write in JavaScript is because lambda expressions are a native quality not hidden behind a convention. To get started I prefer to write a function against a single global reference that contains everything I need.
var myParser = function (options) {
var token = [],
types = [],
parse = function () {};
};
In the above code sample we can see a single global variable, myParser, that contains some declared variables. The references token and types will store data while parse is a child function to store all the lexer/parser relevant instructions. This will allow availability to the token and type data outside the parse function so that we can maintain separation of concerns and data availability without having to pass things around. Now let's jump into the parse child function where we are going to write a simple lexer/parser.
When writing a lexer I prefer to be as explicit as possible. The more explicit the code the lower the risk of unexpected results. This means the code will follow an imperative coding style. First, let's convert the input into a string, if it isn't a string already, and then into an array that we can loop through.
parse = function () {
    var data = options
            .input
            .toString()
            .split(""),
        len = data.length,
        a = 0;
    for (a = 0; a < len; a += 1) {}
};
Converting the input into an array is not required, but it makes things much easier and faster to manipulate. Looping through a binary buffer, for instance, can be more challenging to think through as humans don't read binary or hexadecimal as fast as they read code and strings. Arrays are substantially faster and more expressive to evaluate than large strings.
Using a for loop to iterate through an array is now considered an anti-pattern in JavaScript since the ECMAScript 5 version of the language provides a forEach method. I deliberately choose a for loop because there are times where it is necessary to jump around to various indexes or iterate across the input differently.
Now that we have the basic lexer written let's evaluate some syntax. We are going to evaluate a C language styled block comment. A block comment begins with /* and ends with */. To impose additional separation of concerns we will put this rule behind another child function. First, let's create a rule and the child function:
parse = function () {
    var data = options
            .input
            .toString()
            .split(""),
        len = data.length,
        a = 0,
        commentBlock = function () {};
    for (a = 0; a < len; a += 1) {
        if (data[a] === "/" && data[a + 1] === "*") {
            commentBlock();
        }
    }
};
Now let's define the lexical analysis for a block comment:
commentBlock = function () {
    var comment = [],
        b = 0;
    // start scanning from the primary iterator's position rather than
    // the beginning of the input
    for (b = a; b < len; b += 1) {
        comment.push(data[b]);
        // the b > a + 2 guard prevents the opening "/*" from matching
        // itself; the minimal comment "/**/" closes at b === a + 3
        if (data[b] === "/" && data[b - 1] === "*" && b > a + 2) {
            break;
        }
    }
    a = b;
    token.push(comment.join(""));
    types.push("comment-block");
};
A couple of things happened. We created a new reference b as a separate iterator. In this case the secondary iterator isn't needed and is only present as an example that additional iterators can be used. Additional iterators allow the freedom to traverse the data independently from the primary iterator a. If you choose to create an additional iterator be sure to reassign the value of the primary iterator before exiting the current function to avoid duplicated effort.
At the end of this function we push a token, the entire block comment, and a type. In this case we know the data type by knowing the syntax. This isn't always the case. We won't know if a word is a language keyword or a user defined reference without some additional effort. It is my opinion overall efficiency increases by supplying these additional evaluations directly into the parser. The additional tasks will always make the parser slower, which is wasted effort if it isn't needed. Unfortunately, it is extremely difficult to know what is or isn't needed at parse time and performing these additional evaluations later, after the parsing completes, is far more expensive still.
Here is what the combined code looks like:
var myParser = function (options) {
    var token = [],
        types = [],
        parse = function () {
            var data = options
                    .input
                    .toString()
                    .split(""),
                len = data.length,
                a = 0,
                commentBlock = function () {
                    var comment = [],
                        b = 0;
                    for (b = a; b < len; b += 1) {
                        comment.push(data[b]);
                        if (data[b] === "/" && data[b - 1] === "*" && b > a + 2) {
                            break;
                        }
                    }
                    a = b;
                    token.push(comment.join(""));
                    types.push("comment-block");
                };
            for (a = 0; a < len; a += 1) {
                if (data[a] === "/" && data[a + 1] === "*") {
                    commentBlock();
                }
            }
        };
    parse();
    return {token: token, types: types};
};
// example: myParser({input: "/* a comment */"}) returns
// {token: ["/* a comment */"], types: ["comment-block"]}
In addition to describing tokens by data type it is also frequently necessary to describe them by the current structure in the application code. In markup based languages there are parent elements and child elements, so a given element's structure can be described by a parent element's tag name. In many programming languages this distinction is less clear.
For languages that use a C based syntax an easy way to think about structure is in terms of matching start and end delimiters: (, {, [, < paired with ), }, ], > respectively. The C syntax family includes JavaScript, Java, C#, TypeScript, and many other languages. A structure is defined by the delimiter character and the context rules of a given language. For instance [ and ] typically describe a list or array structure, but in C# they can also describe a data type attribute given the context of [, a type word, optional white space, and a colon. The easiest way to think about it is that you always know where you are in parsing the code, what has already been parsed, and have some idea of what is coming. Keeping that simple rule in mind helps prevent elaborate conventions from distracting from the simple idea of parsing structures with precision, without guessing or rework.
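One way to sketch this idea is a stack of open delimiters: push on every opening character, pop on every closing character, and whatever sits on top of the stack names the current structure. This is a simplified illustration, not Pretty Diff's code; the structure names are assumptions:

```javascript
var structureOf = function (input) {
    var data = input.split(""),
        stack = [],
        names = {
            "(": "paren",
            "[": "array",
            "{": "block"
        },
        a = 0;
    for (a = 0; a < data.length; a += 1) {
        if (names[data[a]] !== undefined) {
            // an opening delimiter starts a new, deeper structure
            stack.push(names[data[a]]);
        } else if (data[a] === ")" || data[a] === "]" || data[a] === "}") {
            // a closing delimiter returns to the parent structure
            stack.pop();
        }
    }
    // whatever remains open at the end of the scan is the current structure
    return (stack.length > 0) ? stack[stack.length - 1] : "global";
};
```

A real lexer would consult this stack as it tokenizes, so every token can be described by the structure containing it.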
Some structures allow context rules and conventions not permitted in other various structures. It is helpful to describe structures during the initial lexing and parsing process to allow decisions later without introducing an additional parsing step. An example is that it may be desirable to identify references, values, and their assignment with distinction which can vary in syntax by the containing structure even within a single language.
Pretty Diff uses two arrays to define structures in its parsers. This keeps the approach to structure consistent between the different parsers and allows descriptions of hierarchy without need for an AST. The Pretty Diff parsers use arrays named depth and daddy (in the case of markup) to describe the name of the current structure and an array named begin that defines the token index where this structure started. Common values, in C based syntax languages, for the depth array include but are not limited to: array, function, method, class, map, object, for, if, switch. In the markup parser the depth array values take the tag name of the parent element. From these arrays an AST or a DOM tree could be formed without additional parsing considerations.
Once the parser completes the parallel arrays will be populated with data. It is important that these parallel arrays always have the same number of indexes and each index describes the data at that index of the token array. Knowing this can save a lot of effort debugging failures later.
Since the output is merely a couple of arrays the data can be easily iterated over. The data can also be examined by specific data types through examination of the types array. Pretty Diff includes several additional parallel arrays for describing qualities such as white space, markup attributes, markup parent element, JavaScript code structure, and more. These additional arrays provide data immediately without need for additional parsing steps.
One use example is navigating data structures. The script and markup parsers both produce an array named begin. Use this data to walk up code structures or walk up the markup element hierarchy. The begin array stores the start index for a given code structure, so begin[a] will return a number less than or equal to "a" indicating the start index of the current structure, where "a" is the index of any parsed token. The number at begin[begin[a] - 1] would then contain the index of the next higher structure's start point. A small loop could traverse this data to create an AST or walk up the DOM.
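A sketch of that loop, using invented sample data. The convention assumed here is that begin[a] holds the index of the opening token of the structure containing token a (or -1 at the top level), so an opener's begin points directly at its parent's opener, a slight simplification of the begin[begin[a] - 1] step described above:

```javascript
// tokens and begin data for the nesting "{ [ x ] }" (invented sample)
var tokens = ["{", "[", "x", "]", "}"],
    begin  = [-1, 0, 1, 1, 0],
    ancestors = function (a) {
        var list = [];
        // hop from a token to its containing opener, then to that
        // opener's own container, until the top level is reached
        while (begin[a] > -1) {
            list.push(begin[a]);
            a = begin[a];
        }
        return list;
    };
// ancestors(2) visits the "[" at index 1 and then the "{" at index 0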
A well written parser is extensible, which means additional supplemental rules can be added later to allow new features at minimal collision with existing features. Before a parser can be extensible it has to be sturdy and confident in its current target language and conventions. The addition of new rules, features, or even languages dramatically increases the risk of errors and malformed output by the parser. Seek out diverse code examples and integrate them into some form of test automation. Constantly seek to discover input that will break the parser or output corrupted results. Once a corrupting code sample is discovered ensure it is added to test automation as a test unit.
A general rule is to define data type names as specifically and precisely as possible in the parser, perhaps even more specifically than the current need demands. When writing a parser all the given edge cases and collisions of various features of the target language likely aren't known. Unfortunately, many of these edge cases will be discovered in production once error reports surface from users. This is unfortunately true even for the most widely used of parsers. Extreme care for precision, even when currently unnecessary, helps mitigate some of these unknown errors.
Another general rule is to perceive the code in negatives, as in what can't or shouldn't be done. Thinking about the code's decisions in a more pessimistic view is a key way to limit risk by continuously focusing on predictable operations in the face of unknown and diverse input. This form of thinking reinforces the idea that rules must be specific and not allow deviations from a desired predictable state of output. As new features are added think about them in terms of what they shouldn't do. The rules that define a new feature will likely be simple, but ensure their presence does not open regressions or complicate maintenance. As complexity extends, the demands for special care and discipline likewise extend.
Since code parsing is comprised of a series of simple and mundane tasks with the primary goals of sturdiness and performance, plus a distant secondary goal of extensibility, the code tends to be rather boring. Imperative coding styles appear to work best for this kind of work. The end result for the code author is all about ensuring the code is as predictable as possible and contains the fewest instructions possible. Fewer instructions mean less code to maintain or test (less developer time) and fewer things for the computer to evaluate (better performance).
I recently extended Pretty Diff's JavaScript parser to support languages Java, C#, and TypeScript. This extension required some additional work in the beautifier, some additional structure descriptions, limiting use of automatic semicolon insertion, support for new operators, and type annotations (type generics). The big challenges were updating the beautifier to support new code structures not present in JavaScript and parsing the type generics. For the point of this discussion I will focus only upon adding the type generics feature.
Type generics are angle bracket delimited tokens similar in appearance to XML tags. Type generics may contain a reference, a comma separated list of references, and may contain a spread operator. They may also be nested so that one type generic token contains another. Unlike XML tags, type generics do not contain attributes, end tags, or child nodes.
The complexity in this enhancement is ensuring it does not cause regression. The JavaScript parser already provides support for the < character as a less than operator and also as the start of an XML tag in the case of React JSX language. The Pretty Diff parsers also don't limit support of XML tags embedded within each other, which is required for the supported JSP template language and could cause additional conflict. This enhancement would allow a third and completely unrelated use for the less than character and it must be added in a way that does not break support for the existing two uses.
This enhancement became easier when I realized that, aside from a TypeScript only edge case, type generics would never exist in the same contexts as JSX's XML tags. This reduced the complexity of the effort so that I only had to tell the difference between whether the less than character represents a less than operator or the start of a type generic token.
At this point in the lexing process you know exactly where you are and you can look up the parsed tokens to see what has come before, but you don't know what comes next in the code yet. Keeping this in mind I chose to presume the token would be a type generic element as opposed to a less than character until I found reason to believe otherwise. If I did find evidence that this could not be a type generic element I could easily return out of this logic and push a less than operator into the token array with appropriate descriptions into the other arrays. This is especially simple since that operator is only one character. I determined disqualifying evidence would be a character reserved for operators or certain other syntax, or a greater number of > characters than < characters.
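That decision process can be sketched as a look-ahead that presumes a type generic and bails out on disqualifying evidence. This is an illustration, not Pretty Diff's actual logic; the function name and the set of disqualifying characters are assumptions:

```javascript
// data is the character array, a is the index of the "<" under question
var isTypeGeneric = function (data, a) {
    var depth = 0,
        b = 0;
    for (b = a; b < data.length; b += 1) {
        if (data[b] === "<") {
            depth += 1;
        } else if (data[b] === ">") {
            depth -= 1;
            if (depth === 0) {
                return true;   // every "<" found a matching ">"
            }
        } else if ("+-*/%&|^=!?".indexOf(data[b]) > -1 || data[b] === ";") {
            return false;      // operator or statement end: treat as less than
        }
    }
    return false;              // more "<" than ">" before the input ended
};
```

If this returns false the lexer emits a one character less than operator and moves on; nothing is lost by having presumed a type generic first.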
The edge case collision I hinted at earlier between TypeScript and React JSX occurs when these elements immediately follow a return keyword. In TypeScript you can return a type generic element from a function. In React JSX you can return XML tags from a function. In this case I do not have enough contextual information to tell the difference between whether this should be an XML tag or a type generic element. To solve this problem I created a new option to pass into the parser. If the language is assumed to be (or chosen by the user as) TypeScript then an option named typescript is set to true, which prevents the parsing of XML tags from JavaScript like code.
Writing parsers isn't fun and exciting like writing the next hot video game. It probably won't unlock the keys to the next major computer science breakthrough. Writing parsers will, however, make you a better programmer. Writing parsers will infuse in you a disciplined, conservative, and humble approach to programming. It is often a thankless effort hidden beneath layers of rules and utilities not visible to the end user. When you do well few people will notice. When you break a user's code many will notice and few will forgive you. Writing parsers is not the path to heroism.
I believe writing parsers has made me a better programmer in ways many other programming disciplines could not. I believe I have learned qualities and decisions that differ from many of my software developer peers because the goals are different.
Application predictability is important. Unpredictable output often results in harm. Helpful tooling and exciting or clever new conventions and features won't make the code more predictable. The only things that make code more predictable are fewer instructions, clearer flow control paths, separation of concerns, and increased testing. The end result is boring stuff and code put on a diet. Dieting in code is often, for many developers, an unacceptable inconvenience, much like dieting in real life.
You are on your own. There might be a team you can turn to for help answering questions, or an adviser to guide you through tough decisions, or there might even be a helpful toolkit to assist with testing. None of those things will solve hard problems for you. You will be forced to make decisions. Many of these decisions could mean catastrophic failure that breaks a user's code. Many times you won't have any idea. There isn't going to be a magical framework out there to make decisions for you. There is no solution in a box to write APIs for you or manage expectations. You are on your own. You have to make hard decisions and when you break a user's code, because it will happen, you have to own those consequences and seek resolution.
Simplicity is different from easiness. Easy means lower effort for the developer writing the code, but simple means less effort for everybody else, including your users and the processing computer. These terms could not be more different. The job of a competent developer is to make that distinction and solve for it as directly as possible. Nobody will forgive you when you fail. Your users will, however, recognize the additional effort to compensate for the external failures around you if that means increased application dependability or user fulfillment.
Superior programmers are simply people who iterate faster. There is no way to know if you have solved a given problem until you have tested the solution. Once the problem is solved you won't know if you have created additional problems or regression issues without running additional tests. The pattern here is to attempt a solution and then attempt to verify the solution against various validations. The key to success is not some brilliant vision or the magic ability to avoid failure. The key to success is simply speed. There is clearly more to it than that, like a diversity of reliable test cases, but in the end it always comes down to speed. The best way to think about this is that a developer who works 10 times faster than the next developer is allowed to fail 10 times more frequently before releasing code into production. The way to achieve superior speed is to access the problems as directly as possible. Each barrier between you and the problem will slow you down. Common barriers are too many tools, abstractions, build processes, compile steps, frameworks, missing documentation, unclear flow control, and so forth. Increased speed pays unforeseeable compounded dividends over time as you learn to further increase your speed with more precise forms of automation, organization, and planning.
Writing parsers is actually enjoyable and fulfilling. Over time you notice that your discipline and development capabilities increase. Your speed dramatically increases over time, which matters because each edge case can quickly consume large portions of your life and mental health. Your ability to perceive large complex problems before they occur and address them with simple elegant solutions is something only you can appreciate, but it applies to many areas of life even outside of software. Given enough practice your ability to write software architectures seems to spontaneously arise from nothing when really it is the result of solving many tiny edge cases paired with a modest amount of planning.
These are my thoughts on different things. I am writing these things hoping they are educational, but my primary motivation is writing them down so that I don't forget them in the future.