The Pragmatic Programmer

The Cat Ate My Source Code

We need to take responsibility for ourselves and our actions. Don’t place all the blame on vendors, programming languages, frameworks, management or colleagues. Provide solutions, not excuses.

To-dos:

  • Refactoring
  • Code That’s Easy to Test
  • Ruthless Testing
  • Prototypes and Post-it Notes

Refactoring (Noted)

Treat software development as gardening rather than as construction.

Refactoring is rewriting, reworking and re-architecting. Do it when there is duplication (a violation of the DRY principle), a non-orthogonal design, outdated code (changed requirements, better understanding) or 202203011139 opportunities (improving performance).

Treat refactoring as medical treatment and checkups: the earlier, the better; the more often, the better. Broadcast such changes to the users. If the refactoring is scheduled, let them know when it will happen.

To avoid making the codebase worse after refactoring (Fowler#), we should make sure that (1) we don’t try to add new functionality while refactoring, (2) we have good tests and run them frequently so we know when we’ve broken something, and (3) we only make small, localised changes, e.g., moving a field from one class to another, renaming a variable, or fusing two similar methods into a superclass.

To-dos:

  • Orthogonality (What is this)
  • The Evils of Duplication (DRY Principle)
  • Code That’s Easy to Test
  • Ruthless Testing
  • Software Entropy

Orthogonality (Noted)

Orthogonality is a property where each component in the codebase is so independent of the others that changes in one do not affect the rest. This aligns with the #Single Responsibility Principle.

To achieve orthogonality, eliminate effects between unrelated things and design self-contained components (independent, with a single, well-defined purpose). Create modules that don’t depend on other modules’ implementations and don’t reveal unnecessary methods to them (decoupling). Avoid global data; replace it with a singleton only with care. Avoid duplicating similar functions.

Benefits of orthogonality: (1) productivity gains and (2) risk reduction. (1) Development and testing time shrink because changes are localised: we work in small components (easier to design, code and unit test) rather than in big chunks of code. Such components are also highly reusable, as they have specific, well-tested responsibilities, and since their functions don’t overlap, no time is wasted doing the same job twice. (2) Because orthogonal components are isolated, debugging is easier: symptoms are likely to be contained in one area, and fixes tend to be smooth. These isolated components are also better tested, since their tests are relatively small and simple. Isolation likewise eases dependence on a particular vendor, product or platform, because its interface is confined to one place.

If an object persistence scheme is transparent, then it’s orthogonal. If it requires you to create or access objects in a special way, it’s not. Introducing a library should not require changes to the original source code. (p. 62)

Refactoring is a great practice for maintaining the orthogonality of a program.

To-dos:

  • It’s Just a View (Model-View-Controller)
  • Decoupling and the Law of Demeter

The Evils of Duplication (Noted)

Programmers are constantly in maintenance mode, as they need to adapt to new requirements and knowledge even during initial development.

Don’t Repeat Yourself (DRY) Principle: Every piece of knowledge must have a single, unambiguous, authoritative representation within a system. (p. 49)

4Is:

  • Imposed duplication (forced by the environment: required documentation, restrictions of the programming language, libraries or development environments)
  • Inadvertent duplication (developers are not aware of it)
  • Impatient duplication (duplicating because it’s easier)
  • Interdeveloper duplication (collaboration within or outside a team)

Documentation is often a duplication of knowledge already expressed in the code.

Imposed Duplication

Use a code generator or a simple filter to generate the shared structure in different forms, for example from a database schema or metadata. This avoids maintaining the same structure on two or more platforms, especially if they use different programming languages.

Bad code requires lots of comments. (p. 51)

Reserve comments for high-level explanations. Avoid using comments to elaborate low-level knowledge; leave that to the code, as such comments will inevitably become outdated.

Utilise the documentation: use it to generate tests or code.

Comment in header files about interface issues; comment in source files on the details of the implementation.

Inadvertent Duplication

Normalise the data according to the business logic and requirements.

When performance matters, especially when caching is needed, make sure the impact of the data change stays localised in the function or class, so the outside world doesn’t have to worry about this kind of violation.

Getters and setters allow for further functionality expansion, such as caching.
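A minimal sketch of that idea, with invented names: because callers go through a getter, a cache can be added later without touching any calling code.

```python
class Report:
    """Hypothetical example: exposing 'total' through a getter means
    caching can be added later without changing any caller."""

    def __init__(self, items):
        self._items = items
        self._total = None  # cache, filled lazily

    @property
    def total(self):
        # Callers just read report.total; they never learn whether the
        # value was cached or freshly computed.
        if self._total is None:
            self._total = sum(self._items)
        return self._total

report = Report([1, 2, 3])
print(report.total)  # → 6, computed once
print(report.total)  # → 6, served from the cache afterwards
```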

Impatient Duplication

Shortcuts make for long delays. (p. 55)

Interdeveloper Duplication

To avoid duplication at a high level, have a clear design, a strong technical project leader and a well-understood division of responsibilities within the design.

Set up a friendly environment for module reuse. Have an easily accessible communication channel where the history of exchanges is kept permanently. Have a project librarian who focuses solely on facilitating knowledge exchange. Put reusable modules, whether utility routines or scripts, into a central place in the repository. Set up a policy of having people read each other’s source code and documentation, informally or during code reviews.

If it isn’t easy, people won’t do it. (p. 56)

Code That’s Easy to Test (Noted)

The key to reusability and modular programming is testability.

Testing should be done in an isolated environment, that is under controlled conditions, where we’ll check the return value of the component against known values or results from previous runs of the same test.

A unit test is code that exercises a module. (p. 190)

Test cases should honour the module’s contract (202206301938#): they let us check both whether the code meets the contract and whether the contract means what we think it means.

Test subcomponents first, before testing the main module.

Design both the module’s contract and the code to test that contract during module design, before committing to its implementation.

Make test facilities easily accessible.

Test facilities provide two things: examples of how to use the module, and a means to build regression tests.

Run tests frequently.

Test harnesses should include the following capabilities:

  • a standard way of set-up and clean-up
  • a method for selecting individual test or all available tests
  • a means of analysing output for expected or unexpected results
  • a standardised form of failure reporting (via logging)

Tests should be composable; that is, a test can be composed of subtests of subcomponents to any depth, so that we can easily select which kinds of tests to run.
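The harness capabilities and composability described above can be sketched in a few lines; everything here (class and method names, the report format) is invented for illustration, not a real framework.

```python
import traceback

class TestSuite:
    """Composable: a suite may contain test functions or other suites."""
    def __init__(self, name, setup=None, cleanup=None):
        self.name, self.members = name, []
        self.setup, self.cleanup = setup, cleanup

    def add(self, member):            # member: a callable or a TestSuite
        self.members.append(member)
        return self

    def run(self, select=None, failures=None):
        failures = [] if failures is None else failures
        if self.setup: self.setup()           # standard set-up
        try:
            for m in self.members:
                if isinstance(m, TestSuite):
                    m.run(select, failures)   # subtests to any depth
                elif select is None or select in m.__name__:
                    try:
                        m()                   # expected results via assertions
                    except Exception:
                        # standardised failure report (could be logged)
                        failures.append((self.name, m.__name__,
                                         traceback.format_exc()))
        finally:
            if self.cleanup: self.cleanup()   # standard clean-up
        return failures

def test_add(): assert 1 + 1 == 2
def test_upper(): assert "dry".upper() == "DRY"

suite = TestSuite("all").add(TestSuite("math").add(test_add)) \
                        .add(test_upper)
print(suite.run())       # → [] (every test passed)
print(suite.run("add"))  # run only tests whose name contains "add"
```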

Don’t toss away simple tests written during debugging. Formalise them into the test suite.

To test a production application (software that has been deployed and is running), use logging (trace messages), hot-key sequences (pop up a diagnostic control window) and a built-in Web server (view internal entries and log status, provide a debug control panel).

Log messages should be in a regular, consistent format so that they are easy to read and parse.
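A sketch of such a regular format using the standard logging module; the field layout and the '|' separator are an arbitrary choice, not a recommendation from the book.

```python
import logging

# Fixed, '|'-separated fields: a script can split each line on '|'
# to filter by timestamp, level or logger name.
logging.basicConfig(
    format="%(asctime)s|%(levelname)s|%(name)s|%(message)s",
    level=logging.INFO,
)
log = logging.getLogger("orders")
log.info("order=42 status=shipped")
# emits e.g.: 2024-01-01 12:00:00,000|INFO|orders|order=42 status=shipped
```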

To-dos:

  • Ruthless Testing (large scale)
  • Design by Contract (What is contract?)

Ruthless Testing (Noted)

Test early. Test often. Test automatically. (p. 237)

Write tests at the same time as, or even before, the production code. This finds bugs earlier, when they are cheaper to fix. The Smalltalk community refers to this as “Code a little, test a little”, and eXtreme Programming echoes the concept with “continuous integration, relentless testing”.

The code isn’t done until all the tests run.

Unit testing is the foundation of all other testing.

Integration testing tests the interactions between the project’s major subsystems. The contracts set on those subsystems must be honoured.

Validate and verify that the system adheres to the users’ needs and the functional requirements of the system. Be aware of end-user access patterns and how they differ from the test data.

Detect environmental limitations and, if possible, recover from failures. When the system fails, make it fail gracefully with a clear error message, or try to preserve its state to prevent loss of work.

Performance testing, stress testing and testing under load check that the system meets its performance requirements under real-world conditions. Specialised testing hardware or software might be needed.

Usability testing should examine human factors under real environmental conditions.

Validation and Verification, and Usability Testing should be done as early as possible.

Regression testing compares the output of the current test with previous values to find any disparity in performance, contracts, validity and so on.

Test data can be gathered from the real world or created artificially. Real-world data (user data) can come from an existing system, a competitor’s system or a prototype. Artificial (synthetic) data is preferable when a large amount of data is needed, when stressing boundary conditions, or when the data must exhibit certain statistical properties (such as for algorithm runtime).

Decoupling leaves room for more modular testing.

Sabotage your tests by deliberately introducing bugs and verifying that the tests catch them. You can have a project saboteur responsible for this entire process, working in a separate source tree.

Full (100%) coverage is impossible.

Test the states of the program, not just the lines of code (state coverage, not code coverage).

Run tests (unit tests, integration tests, or any test that doesn’t require specialised equipment or environments) before checking in the code.

Once a human tester finds a bug, it should be the last time a human tester finds that bug. (Find Bugs Once) (p. 247)

To-dos:

  • Design by Contract (What is contract?)
  • Decoupling and the Law of Demeter (Advantages of decoupling)
  • Ubiquitous Automation

Software Entropy

Entropy: the amount of disorder in a system.

One small rotten part can spread, as it instils a sense of abandonment if it isn’t fixed within a short period. It can originate from bad designs, wrong decisions or poor code.

Design by Contract (Noted)

A contract defines the rights, responsibilities and repercussions of a software module.

A correct program is one that does no more and no less than it claims to do.

Every function and method needs to satisfy its expectations and claims. These take the form of preconditions, postconditions and class invariants. If they can’t be met, an exception or error must be raised.

Preconditions are what must be true in order for the routine to be called (the routine’s requirements). They are checked by the caller.

Postconditions are what the routine is guaranteed to do (the return state of the routine). These should be checked by the routine itself. Be aware that parameters may be changed inside the function or method, depending on the language implementation.

A class invariant is a condition that constrains the objects of a class; it must always be true before and after any method call.
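The three kinds of condition can be sketched with plain assertions; this is a minimal illustration (invented Account example, not a DBC library), and for simplicity the routine checks its own precondition here rather than leaving it to the caller.

```python
class Account:
    def __init__(self, balance):
        self.balance = balance
        assert self._invariant()

    def _invariant(self):
        # class invariant: the balance never goes negative
        return self.balance >= 0

    def withdraw(self, amount):
        assert self._invariant()
        # precondition: the routine's requirement on its caller
        assert 0 < amount <= self.balance, "precondition violated"
        old = self.balance
        self.balance -= amount
        # postcondition: what the routine guarantees it has done
        assert self.balance == old - amount, "postcondition violated"
        assert self._invariant()
        return self.balance

acct = Account(100)
print(acct.withdraw(30))   # → 70
# acct.withdraw(1000) would raise AssertionError: precondition violated
```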

Be strict in what you will accept before you begin, and promise as little as possible in return. (p. 111)

A subclass must support the same methods as its parent, and each method must retain its meaning without much alteration, though it may accept a wider range of input or make stronger guarantees than the parent’s version. Put such contracts on the base class so they apply to all derived classes.

Crashing a program early is a good idea.

A loop invariant is a condition that remains true before and during the execution of a loop.

Examples of DBC implementations in libraries and programming languages are Nana (a DBC library for C/C++), iContract (a DBC library for Java, possibly dated), Cofoja (a more modern DBC library for Java) and Eiffel (a programming language designed around DBC).

To-dos:

  • Decoupling and the Law of Demeter
  • Dead Programs Tell No Lies
  • Temporal Coupling

It’s Just a View (Noted)

Use events to signal changes from one sender to many receivers (a one-to-many relationship). The sender doesn’t need any explicit knowledge of the receivers, and the receivers can each have their own agenda.

If receivers have to handle many, or even all, of the events they don’t need, coupling increases: it violates object encapsulation, since receivers need knowledge# of many objects in order to interact with them. This should be avoided.

A publish/subscribe protocol controls the event flows. Receivers register themselves with a publisher (object) they are interested in. The publisher keeps track of who is subscribed and, when it produces an event, calls each subscriber in turn to notify it of the occurrence. If a receiver unsubscribes, the publisher takes note and removes that receiver from its list of objects to notify.

The publisher is the producer of events. Receivers are consumers of the events they need.

There are many implementations of the publish/subscribe protocol: peer-to-peer, centralised (a software bus like the CORBA Event Service, an object responsible for maintaining a database of listeners and distributing messages accordingly) and broadcast (sent regardless of registration).

The CORBA Event Service uses an event channel (a common software bus shared by participating objects) that allows pushing and/or pulling of event notifications. In push mode, the publisher informs the channel of an event’s occurrence, and the channel distributes it to all registered subscribers. In pull mode, subscribers poll the channel periodically, and the publisher responds by sending the event data if such an event has happened.
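The peer-to-peer variant can be sketched in a few lines; the class and method names are invented, and the receivers here are plain callables.

```python
class Publisher:
    def __init__(self):
        self._subscribers = []        # who to notify, in order

    def subscribe(self, receiver):
        self._subscribers.append(receiver)

    def unsubscribe(self, receiver):
        self._subscribers.remove(receiver)   # stop notifying this one

    def publish(self, event):
        # Call each registered receiver in turn; the publisher needs
        # no knowledge of what the receivers do with the event.
        for receiver in list(self._subscribers):
            receiver(event)

clock = Publisher()
seen = []
clock.subscribe(seen.append)                  # receiver 1: record events
clock.subscribe(lambda e: print("tick:", e))  # receiver 2: its own agenda
clock.publish("09:00")   # prints "tick: 09:00"
print(seen)              # → ['09:00']
```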

The Model-View-Controller (MVC) idiom separates the model (the abstract data model representing the target object) from the user interface (the view: GUI/TUI/CLI, a way to interpret the model, a subscriber to model changes and controller events) and the controller (a way to manage the view and provide the model with new data; a publisher of events to the model and view).

The model can draw from multiple sources. These data should be formatted so that they are easy to parse and read.

A viewer can itself be a model for a higher-level object.

Each model may have multiple viewers, each viewer may interact with multiple models. (many-to-many relationship)

A controller can be used to keep views from overwhelming the user, increasing usability.

To-dos:

  • Reversibility
  • Blackboards (Decoupled more)
  • Decoupling and the Law of Demeter

Decoupling and the Law of Demeter (Noted)

Shy code: don’t reveal yourself to others, and don’t interact with too many people. (p. 138)

A good codebase is organised so that the interaction between modules is limited; if one is compromised, the others remain unaffected.

Users should not interact with third-party modules directly. There should be a general “contractor” that encapsulates such dependencies and acts on the user’s behalf.

Highly coupled modules mean that a simple change to one module propagates through unrelated modules, and that developers become reluctant to change code because they’re unsure what will be affected.

Follow the Law of Demeter @Lieberherr1989#: any method of an object should call only methods belonging to itself (self, or functions defined in the class), methods of parameters passed into the method, and methods of objects created within the method or its class. (See more at https://www.ccs.neu.edu/research/demeter)
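A hypothetical illustration (invented Customer/Wallet classes): instead of reaching through the customer into its wallet, the caller asks the customer to do the work itself.

```python
class Wallet:
    def __init__(self, funds):
        self.funds = funds

class Customer:
    def __init__(self, funds):
        self._wallet = Wallet(funds)   # internal detail, not exposed

    def pay(self, amount):
        # the customer manipulates its own wallet
        if self._wallet.funds < amount:
            raise ValueError("insufficient funds")
        self._wallet.funds -= amount

    def balance(self):
        return self._wallet.funds

def charge(customer, amount):
    # Violates the Law of Demeter (reaches through customer into wallet):
    #   customer._wallet.funds -= amount
    # Obeys it: calls a method on the parameter itself.
    customer.pay(amount)

c = Customer(50)
charge(c, 20)
print(c.balance())  # → 30
```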

To-dos:

  • Reversibility

Blackboards (Noted)

The characteristics of a blackboard:

  • Modules don’t need to know of each other’s existence
  • Modules watch the board for new information, and add their own if needed
  • The data may come in many forms: integers, graphics, data structures and so on
  • Modules may have vastly different responsibilities
  • Module lifetimes can vary, and modules may interact with the blackboard at different times

Examples: JavaSpaces and T Spaces. They are both based on the tuple space.

Prototypes and Post-it Notes (Noted)

Prototypes can be code-based or just written down on post-it notes (sticky notes). Sometimes drawing is another way to develop a prototype, e.g. for a GUI.

Post-it notes can be great to illustrate dynamic things such as workflow and application logic.

Prototypes are cheap and fast to develop because they ignore most details: correctness (use dummy data), completeness (they may function only in a very limited sense, working on one given input or one menu item), robustness (incomplete error checking, likely to crash) and style (little to no comments and/or documentation). They are designed to answer just a few questions.

Build a prototype only when necessary. Otherwise, try tracer bullets.

The following things can be prototyped:

  • Architecture (how the system hangs together as a whole)
  • New functionality in an existing system
  • Structure or contents of external data
  • Third-party tools or components
  • Performance issues
  • User interface design

Prototype to learn. (p. 54)

Use a high-level language, higher than the rest of the project (Perl, Python or Tcl), to develop the prototype. For interfaces, the authors recommend Tcl/Tk, Visual Basic, Powerbuilder or Delphi.

When building an architecture prototype, inspect whether the major components’ responsibilities, and the collaborations between them, are well defined and appropriate. Investigate the coupling of the architecture (is it minimised?). Find out whether there are potential sources of duplication. Are the interface definitions and constraints acceptable? Pay attention to each module’s accessibility to the data it needs during program execution.

Prototypes are disposable code. Do not try to incorporate them directly into the production environment.

To-dos:

  • Tracer Bullets (another approach)
  • Great Expectations

Tracer Bullets (TODO)

Tracer bullet development is an incremental approach to project development. It means we are not writing disposable code: error checking, structuring, documentation and self-checking should all be treated with care, even though the result is not yet fully functional.

Prototyping is about exploring functionality; tracer bullets are about building a lean structure to develop within.

Illuminate the target as you code, instead of working in the dark or relying on complicated up-front set-ups.

Look for something that gets the team from a requirement to some aspect of the final system quickly, visibly and repeatably.

To-dos:

  • Great Expectations

Dead Programs Tell No Lies (Noted)

Don’t fall to the “it can’t happen” mentality.

A dead program is better than a crippled one.

Always check the return value of a function or method against the expected value. Crash the program as soon as there’s a problem.

    #define CHECK(LINE, EXPECTED)                             \
      { int rc = LINE;                                        \
        if (rc != EXPECTED)                                   \
          ut_abort(__FILE__, __LINE__, #LINE, rc, EXPECTED); }

    void ut_abort(char *file, int ln, char *line, int rc, int exp) {
      fprintf(stderr, "%s line %d\n'%s': expected %d, got %d\n",
              file, ln, line, exp, rc);
      exit(1);
    }

To-dos:

  • When to Use Exceptions

Assertive Programming (Noted)

If it can’t happen, use assertions to ensure that it won’t. (p. 122)

In C and C++, we can do assertions using assert or _assert.

Never put code that must be executed into assertions.

Assertions are not a means of error handling: error handling checks things that could happen; assertions check things that must not happen.

Avoid side effects in assertions at all costs.
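A sketch of why: if assertions are compiled out (e.g. `python -O`), any expression inside them never runs, so a side effect in an assertion silently changes program behaviour. The queue example is invented for illustration.

```python
from collections import deque

jobs = deque(["a", "b"])

# BAD: the pop happens inside the assertion, so under `python -O`
# the job would never be consumed at all:
#   assert jobs.popleft() == "a"

# GOOD: do the work unconditionally, then assert on the result.
job = jobs.popleft()
assert job == "a"

print(len(jobs))  # → 1, whether or not assertions are enabled
```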

Even though assertions add some overhead to the program, turning them off in release builds is a bad idea: it assumes that testing alone will find every bug in the codebase, which is rarely the case.

Reversibility

There are no final decisions. (p. 46)

Maintain the flexibility of the architecture, the deployment (stand-alone, client-server, n-tier) and vendor integration. Abstract them into well-defined modules or interfaces.

Stick to the practice of 202206171004#, # and decoupling# to reduce the numbers of irreversible decisions.

Technologies such as 202207041138# can help.

Change the system deployment by just changing the configuration file. (an idea)

To-dos:

  • Metaprogramming

When to Use Exceptions

Use #exceptions only under exceptional circumstances. For example, if a file is required for the operations performed by the program, an exception should be thrown if it doesn’t exist; otherwise, a simple error return is fine.

An exception imposes an immediate, nonlocal transfer of control, which can disrupt the flow of the program. It can degrade the readability and maintainability of the codebase, as it couples routines tightly to their callers.

An error handler can be handy when the language has no exception-handling facility or when exception handling is tedious. When an error is detected, the error handler (a routine) is called to deal with it. An error handler can be limited to a specific category of errors.

Ubiquitous Automation (Noted)

Don’t trust manual procedures; they are susceptible to inconsistency and lack repeatability.

Automate with shell scripts or batch files, and put them under source control.

Use cron (Unix) or at (Windows) to schedule tasks that need to run periodically.

Utilise the build system (in the book, Makefiles) to build, to generate code or documentation, and to run 202206201335#.

Full build should run all available tests.

Run tests regularly.

Different views of the documentation (website, Markdown, PDF) should be maintained by an automated script, in adherence to 202206171004#.

To-dos:

  • Code Generators
  • The Power of Plain Text

Code Generators (Noted)

Two types of code generators: passive and active.

A passive code generator is basically a parameterised template that generates a given output from a set of inputs. It needs to be run only once; after that, the result becomes a source file in the project, which can be edited, compiled and placed under source control without further help from the generator.

Uses of passive code generator:

  • creating new source files (templates, source code control directives, copyright notices, standard comment block)
  • performing one-off conversions among programming languages
  • producing lookup tables and other resources

An active code generator is often used as a bridge between two disparate environments, avoiding a violation of 202206171004#. Typically a schema serves as input to the generator, which produces output in two different programming languages or forms according to the given arguments. When the schema changes, the regenerated results reflect the change. An active code generator therefore, in contrast to its passive counterpart, needs to be run repeatedly, ideally during the build process.
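A minimal sketch of that idea: one schema (the single source of truth) emits both a SQL table and a matching class, so the two representations cannot drift apart. The schema format and output templates are invented for illustration.

```python
# Single authoritative schema; run the generator on every build.
SCHEMA = {"name": "user", "fields": [("id", "INTEGER"), ("email", "TEXT")]}

def to_sql(schema):
    # emit the database side of the shared structure
    cols = ",\n  ".join(f"{n} {t}" for n, t in schema["fields"])
    return f"CREATE TABLE {schema['name']} (\n  {cols}\n);"

def to_python(schema):
    # emit the application side of the same structure
    args = ", ".join(n for n, _ in schema["fields"])
    body = "\n".join(f"        self.{n} = {n}" for n, _ in schema["fields"])
    return (f"class {schema['name'].capitalize()}:\n"
            f"    def __init__(self, {args}):\n{body}")

# Changing SCHEMA and re-running updates both outputs at once.
print(to_sql(SCHEMA))
print(to_python(SCHEMA))
```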

To-dos:

  • Evil Wizards (passive code generators and CASE tools)
  • The Power of Plain Text

Evil Wizards (Noted)

Wizards: third-party passive code generators

Although having a wizard produce piles of code can save a lot of work, it is better not to use one if you don’t understand the code it produces. If its output is a bit off, or circumstances change, you are on your own to make the changes. If you use wizard code you don’t understand, you won’t be in control of your own application, and its maintainability can become disastrous, especially when debugging.

To-dos:

  • Programming by Coincidence

Metaprogramming

Make the system highly configurable at runtime. Express options in metadata (data that describes the application), not in code. The metadata should be available while the program runs, so that the program can access and use it without recompilation.

Metadata decides how an application should run, what resources it should use, and so on. It should be #declarative (describing what is to be done).

Put abstractions in code, details in metadata. (p. 145)

The benefits of the approach:

  • Enforces decoupling
  • Enforces a more robust, abstract design (by deferring details)
  • Customisation without recompiling
  • Options expressed in a #declarative manner rather than in a full-fledged programming language
  • Reusability

Encode business logic as a rule-based or expert system when the business environment is dynamic.

Domain languages can be used as a means to construct metadata.

Example: Enterprise Java Beans (EJB)
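A minimal sketch of “abstractions in code, details in metadata”: behaviour is read from a JSON document at run time, so changing it needs no recompilation. The config keys and values are invented; `io.StringIO` stands in for a real file on disk.

```python
import io
import json

# Stands in for a metadata file (e.g. a hypothetical app.json) that
# operators can edit without touching the code.
metadata = io.StringIO('{"retries": 3, "greeting": "hello"}')
config = json.load(metadata)

def greet(name):
    # The abstraction (greeting a user) lives in code;
    # the detail (which word to use) lives in metadata.
    return f"{config['greeting']}, {name}"

print(greet("pragmatic"))  # → hello, pragmatic
print(config["retries"])   # → 3
```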

To-dos:

  • Domain Languages (constructing metadata)
  • The Power of Plain Text (in configuration)

The Power of Plain Text (Noted)

Plain text is made up of printable characters which can be easily read and understood by people. It can be unstructured, as in TXT or Markdown, or structured, as in XML, SGML and HTML.

Keep knowledge in plain text. (p. 74)

There are two main drawbacks of using plain text:

  • it requires more storage
  • it may be computationally more expensive to parse and process

If there are performance concerns, one can store the data in a binary format and use plain text as a view of those binaries, as in the case of Solaris.

Store metadata about the raw data in plain text.

If security is a concern, encrypt or hash the data.

Even in a legacy system, plain text will not become outdated. Storing data in a plain text format that is readable and self-describing will drastically help maintainers, especially if they are new to the system; parsing such data should not be a hassle.

Plain text is cross-platform and cross-language: virtually everything can operate on plain text.

It is easier to test using plain text as a means of storing synthetic data, since it is easy to add, update or modify without any dedicated tool. For 202206201335#, if the output is plain text, analysis can be trivial: just compare the differences with the previous version of the file or, if necessary, pass it to Perl, #python or another scripting language.

The Unix philosophy relies heavily on plain text.

Domain Languages

Have a mini-language (ideally executable) that is human-readable and understandable, to capture users’ requirements or specifications, so that end users can program much closer to the application domain and ignore implementation details. If there is an error parsing and/or processing the user’s configuration written in the mini-language, make sure the error message is clear, using the vocabulary of the domain.

Different users have different problem domains.

To implement a mini-language, first define the syntax using a notation such as Backus-Naur Form (BNF). Then convert it into the input syntax for a parser generator, such as yacc, bison or JavaCC (see the book Lex and Yacc for details). The alternative is to extend an existing high-level language such as #python, Perl or Lua to handle the application-level functionality.

There are two types of domain language, by intended purpose. A data language suits simple configuration, as it presents a data structure to be used by the application. An imperative language gives users greater flexibility: it can be executed and can contain statements, control constructs and other more complex instructions.

A domain language doesn’t need to be used by the application directly. It can be used to create a data structure, metadata or another artifact that the application takes as input later on.
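A sketch of the data-language case: a tiny hand-written parser for a "property = value" syntax (invented here) that turns a user-editable specification into a plain dict, and reports errors in the vocabulary of the domain rather than the implementation.

```python
def parse(source):
    settings = {}
    for lineno, line in enumerate(source.splitlines(), 1):
        line = line.split("#", 1)[0].strip()   # allow '#' comments
        if not line:
            continue
        if "=" not in line:
            # error message in the user's vocabulary, not the parser's
            raise ValueError(f"line {lineno}: expected 'property = value'")
        key, value = (part.strip() for part in line.split("=", 1))
        settings[key] = value
    return settings

spec = """
# user-editable print specification
paper  = A4
copies = 2
"""
print(parse(spec))  # → {'paper': 'A4', 'copies': '2'}
```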

To-dos:

  • Text Manipulation
  • The Requirements Pit (write code using the vocabulary of the application domain, project glossary)

Text Manipulation

Text manipulation tools: awk, sed, #python, Tcl and Perl.

There are several uses for those tools:

  • Producing database schema definitions in different languages
  • Automatically adding a getter and setter for each attribute of a class
  • Generating test data in a uniform format from disparate files
  • The authors’ own book writing, especially for the code shown so far
  • Generating interface source code from headers, even across languages
  • Generating web documentation

202207132124# could be written in the above-mentioned languages.
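The getter/setter bullet above can be sketched as a small text-manipulation script; the template and names are invented, and real projects would read attribute lists from source files rather than a hard-coded list.

```python
# Boilerplate accessor template, filled in once per attribute name.
TEMPLATE = """\
    def get_{name}(self):
        return self._{name}

    def set_{name}(self, value):
        self._{name} = value
"""

def accessors(attributes):
    # Emit accessor code for every attribute; paste or pipe the output
    # into the class body instead of typing it by hand.
    return "\n".join(TEMPLATE.format(name=a) for a in attributes)

print(accessors(["width", "height"]))
```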

Great Expectations

That an application correctly implements its specification doesn’t mean it is successful. If it doesn’t meet users’ expectations, it can still fail, even if the deliverables are perfect.

A child opens an expensive Christmas present and bursts into tears - it wasn’t the cheap doll the child was hoping for. (p. 255)

Don’t ignore the users’ vision of the project, even if it is incomplete, inconsistent or technically impossible. Understand their requirements and communicate with them. Let them know what you’ll deliver and how you’ll develop each feature. Both tracer bullet development and 202207120959# are great for showcasing your understanding of their requirements.

Try to pleasantly surprise your users: delight them, don’t scare them. Gently exceed their expectations by giving them a little more than they were expecting. Add something that is relatively easy to implement and looks good to the average user. Those small extras should not bloat or break the system.

To-dos:

  • Communicate! (with the users)
  • Good-Enough Software
  • The Requirements Pit

Temporal Coupling (Noted)

Think about the concurrency (things happening at the same time) and ordering (the relative positions of things in time) aspects of the program. Getting rid of time and order dependencies leads to a smoother workflow, a cleaner architecture, clearer interfaces and better performance. Thinking in a purely linear mindset can result in temporal coupling.

Temporal coupling, that is, coupling in time, happens when tick must occur before tock.

Use a UML activity diagram to capture the workflow (business requirements and logic). We can then identify which tasks or actions can happen in parallel and which must be synchronised before proceeding.

Hungry consumer model: instead of using a central scheduler, consumers are independent of each other and of the other components in the system. Each consumes tasks from a work queue without disturbing anyone else’s business and, when finished, grabs some more from the queue. If one consumer gets bogged down, the others pick up the slack, and everyone proceeds at their own pace.
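The model above can be sketched with a shared work queue and a few worker threads; each worker simply grabs the next item whenever it finishes, with no central scheduler deciding who gets what. The sentinel convention and the squaring "work" are invented for illustration.

```python
import queue
import threading

work = queue.Queue()
results = []
lock = threading.Lock()

def consumer():
    while True:
        item = work.get()           # grab some work from the queue...
        if item is None:            # sentinel: no more work, go home
            break
        with lock:
            results.append(item * item)   # ...and process at our own pace

# Three hungry consumers share one queue; a slow one just takes fewer items.
threads = [threading.Thread(target=consumer) for _ in range(3)]
for t in threads:
    t.start()
for n in range(10):
    work.put(n)
for _ in threads:
    work.put(None)                  # one sentinel per consumer
for t in threads:
    t.join()

print(sorted(results))  # → [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```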

Design using services. (p. 154)

Service: independent, concurrent objects behind well-defined, consistent interfaces. (p. 154)

Objects must be in a valid state when called. An invalid state is often caused by a constructor that doesn’t leave the object fully initialised.

Designing with concurrency in mind leaves more room for scaling and performance optimisation. Going the other way, from a non-concurrent system to a concurrent one, is much harder.

To-dos:

  • Programming by Coincidence

Programming by Coincidence (Noted)

Avoid programming by coincidence, do programming deliberately.

Know why the code works. Test# it thoroughly. Don’t assume the code will work as intended just because that’s the way it is currently written.

Modularise the code that will be called by others and hide its implementation behind small, well-documented interfaces, guided by the 202202041514#.

Rely on documented behaviour# whenever designing a routine. If you can’t, then document your assumption.

Question the context: is it necessary for the module to rely on certain requirements? Are there any assumptions the module needs in order to work?

Testing can suffer from false causalities and coincidental outcomes. Advice: don’t assume it, prove it.

To program deliberately, do:

  • Always be aware of what you are doing
  • Don’t code blindfolded (understand the requirement and technology used)
  • Proceed from a plan (CASE can assist)
  • Rely only on reliable things (don’t depend on accidents or assumptions; if you can’t tell the difference, assume the worst)
  • Document your assumptions#
  • Test code and your assumptions
  • Prioritise your effort, especially on the important aspects (most likely the hard parts)
  • Don’t be a slave to history (all code can be replaced, be ready to refactor#)

To-dos:

  • Debugging (don’t assume it, prove it)
  • Stone Soup and Boiled Frogs (synergy)

Debugging (Noted)

Attack debugging as a logic puzzle that needs to be solved, not as a blaming game.

Don’t panic. (p. 91)

Don’t think “that’s impossible”#.

Don’t just fix the symptoms. The actual fault might be several steps removed from what you have observed. Try to find the root cause of the bug.

First things first: set the compiler warning level as high as possible and let the compiler do its job. Once the code compiles with no warnings, there is no need to waste time on problems the compiler could have caught; focus on the harder ones.

Interview the user who reported the bug to gather a sufficient level of detail about the dysfunction. A reported bug may also mean that the test assets# don’t cover enough of the application. Write thorough tests based on boundary conditions and realistic end-user usage.

Try to reproduce the bug with a single command.

Visualise the debugging process:

  • simply print out the value of a variable (using a print function or GUI popup)
  • a debugger with data visualisation
  • paper-and-pencil
  • external plotting programs (gnuplot)
  • DDD (Data Display Debugger) debugger for Ada, #c, #cpp, Fortran, #java, Modula, Pascal, #perl, and #python

In a multithreading environment#, a real-time system, or an event-based application#, tracing statements (printing messages as simple as “got here” or “value of x is 2” to the screen or a log file) are more helpful than a debugger (which only shows the stack trace and the state of the program right now). They should be in a regular, consistent format so one can read or parse them easily.
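A small Python sketch of such tracing statements in one regular, consistent format (`TRACE key=value ... | message`). The in-memory `TRACE` list and the `withdraw` routine are made up; in practice the lines would go to a screen or log file.

```python
# Trace messages in one consistent, parseable layout.
TRACE = []  # stands in for a log file or the screen

def trace(msg, **values):
    fields = " ".join(f"{k}={v}" for k, v in sorted(values.items()))
    line = f"TRACE {fields} | {msg}"
    TRACE.append(line)
    print(line)

def withdraw(balance, amount):
    trace("got here", amount=amount)            # "got here"-style message
    balance -= amount
    trace("after withdrawal", balance=balance)  # "value of x is 2"-style
    return balance

withdraw(100, 42)
```

Because every line shares the same shape, the log can later be filtered or parsed mechanically rather than read by eye.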

If a variable is corrupted, check its surrounding memory area.

Explain a bug or a problem to a “rubber duck”, which could be a literal rubber duck, a colleague, or anyone who is willing to listen to you. They don’t need to speak, they just need to nod or quack.

Though a bug could exist in the OS, the compiler, or a third-party component such as a library, always assume first that the bug comes from the application itself. Slowly eliminate code line by line (binary search) in the application codebase; only once no bug turns up there should you suspect a problem in the external environment.

If something surprises you, re-evaluate your assumptions. Don’t just “know” your code works; prove it, in this context, with this data, with these boundary conditions. Add a new test# for it, 202207091736# or put in some 202207091744#.

Don’t assume it, prove it. (p. 97)

Communicate with the team if the bug resulted from someone’s wrong assumption.

Checklist:

  • Is the problem a direct result of the bug, or merely a symptom?
  • Is it in your code or in the external environment, such as the OS, compiler or a third-party library?
  • What would you say to explain the problem in detail to a rubber duck?
  • If the unit test for the suspected code passes, are the tests complete enough? What happens if you run the same test with this data?
  • Do the conditions that caused this bug exist anywhere else in the system?

The Requirements Pit

The needs formulated by the user, often called requirements, can sometimes be vague. Assess and analyse them with care. Don’t embed business policy in an absolute statement, as the policy might change in the future. Instead, turn it into metadata and document it separately from the requirement. Make sure the requirements remain abstract (try making the simplest statement that accurately reflects the business need).
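A toy Python illustration of policy-as-metadata. The refund limit, the role name, and the JSON shape are all invented for the sketch; the point is that the code states the abstract requirement while the changeable numbers live outside it.

```python
import json

# Policy kept as metadata (here a JSON string; in practice a config file),
# so it can change without touching the code.
POLICY = json.loads('{"refund_limit": 50, "approver_role": "supervisor"}')

def can_refund(amount, role):
    # Abstract requirement: small refunds are allowed; large ones
    # need the approver role. The concrete values come from metadata.
    return amount <= POLICY["refund_limit"] or role == POLICY["approver_role"]

print(can_refund(30, "clerk"))       # small refund: allowed
print(can_refund(80, "clerk"))       # over the limit: denied
print(can_refund(80, "supervisor"))  # approver role: allowed
```

If the business later changes the limit to 100, only the metadata changes, not the requirement statement or the code.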

Solve the business problem, not just meet the stated requirement.

Work with a user to think like a user. (p. 204)

Sometimes the interface is the system. (p. 205)

Document the requirements using use case templates or diagrams. They are effective media for communication.

Track requirements by tracking feature requests and approvals to avoid feature bloat.

Maintain a glossary where the project’s terms and vocabulary are defined to prevent misunderstanding. #Domain Languages

Web-based documentation should be preferred.

To-dos:

  • Stone Soup and Boiled Frogs
  • Good-Enough Software
  • It’s All Writing

Stone Soup and Boiled Frogs

Be a catalyst for change. (p. 8)

Bring people together to produce a synergistic result by lowering their guard. Work out what you can reasonably ask for, then show them the end result. Suggest something that could make it better, but pretend it’s unimportant. Then wait for them to start asking you to add the suggested functionality.

Don’t be like the frog (in gradually boiling water). Keep an eye on the big picture. Constantly review what’s happening around you, not just what you personally are doing. (p. 9)

To-dos:

  • Pragmatic Teams

Good-Enough Software

Make quality (of the software or product) a requirements issue. (p. 11)

Give users the opportunity to decide when the produced software is good enough.

Treat programming like painting. Learn when to stop. Don’t over-complicate or over-refine the software. Just like art, nothing can be perfect.

Communicate!

Plan what you want to say. Write an outline. Then ask yourself, “Does this get across whatever I’m trying to say?” Refine it until it does. (p. 18)

Know the needs, interests, and capabilities of your audience. Different audiences want to hear different content, related to their roles or responsibilities.

WISDOM:

  • What: What do you want them to learn?
  • Interest: What is their interest in what you’ve got to say?
  • Sophisticated: How sophisticated are they?
  • Detail: How much detail do they want?
  • Owner: Whom do you want to own the information?
  • Motivate: How can you motivate them to listen to you?

Understand the audience’s priorities. Make what you say relevant in time as well as in content. For example, if the manager has just lost user data, that is the moment to propose your idea of establishing a database system.

Different users prefer different communication styles. Adapt wisely. If it is impossible to do so, tell them.

Produce documentation with good style and layout.

Involve readers with early drafts of the document. Get their feedback, and pick their brains. (p. 21)

Be a listener. (p. 21)

Ask your audience questions, or have them summarise what you’ve presented. Turn the meeting into a dialogue.

Always respond to people, even if it is just “I’ll get back to you later”.

To-dos:

  • It’s All Writing

Pragmatic Teams

Quality is a team issue. (p. 224)

The team should not tolerate any broken windows; small fixes should be encouraged rather than brushed aside. The team as a whole must take responsibility for the quality of the product, not just place the burden on particular individuals.

Don’t assume someone else will take over an issue and fix it. Don’t rely on the team leader to make every decision. Actively monitor environmental changes that weren’t in the original agreement, such as an increased scope of the problem domain, decreased time scales, additional features, or new environments. Keep metrics on new requirements.

The team should have one unified external voice and identity. Prepare to communicate with outsiders. Have good documentation: simple, accurate and consistent.

Have a librarian responsible for coordinating documentation and code repositories. They will be the first person to ask when a team member is looking for something. Beyond that, appoint people as focal points for the various functional aspects of the work, especially if they are important players in that field.

Organise around functionality, not job functions. (p. 227)

Don’t separate analysis, design, coding and testing! They are actually different views of the same problem. Instead, organise around modules: each subteam is responsible for exactly one module or class.

A typical team needs two heads: one technical, one administrative. The technical head sets the development philosophy and style, assigns responsibilities to subteams and initiates discussions between members. They also look at the bigger picture and try to eliminate factors that reduce the orthogonality of the team. The administrative head, or project manager, schedules the resources that the team needs, monitors and reports on progress, and helps decide priorities in terms of business needs. They can also act as the team’s ambassador to the outside world.

Larger projects might need a librarian and a tool builder (build automation).

It’s All Writing (Noted)

Comments# should describe why something is done: its purpose, its goal, its engineering trade-offs, which alternatives were discarded, etc. Don’t document how it is done; that violates the 202206171004#, as the code itself should already explain it.
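A small Python illustration of the why-versus-how distinction. The retry count and the gateway story in the comment are invented for the example; what matters is that the comment records a decision the code alone cannot express.

```python
# Why-comment: records the trade-off behind the value, which the code
# itself cannot explain. (The gateway story here is hypothetical.)
MAX_RETRIES = 3  # payment gateway drops some requests under load;
                 # three attempts was the trade-off chosen over queueing

def fetch_with_retry(fetch):
    # A how-comment like "loop MAX_RETRIES times calling fetch" would
    # merely restate the code below and so is left out.
    for attempt in range(MAX_RETRIES):
        try:
            return fetch()
        except ConnectionError:
            if attempt == MAX_RETRIES - 1:
                raise

def flaky():
    """A stand-in for a call that fails twice, then succeeds."""
    flaky.calls += 1
    if flaky.calls < 3:
        raise ConnectionError
    return "ok"
flaky.calls = 0

print(fetch_with_retry(flaky))  # succeeds on the third attempt
```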

The authors recommend that a source file contain a simple module-level header comment, comments for significant data and type declarations, and a brief per-class and per-method header (describing how the function is used and anything it does that is not obvious).

Choose meaningful names for variables.

The authors recommend against documenting a list of the functions exported by the code in the file (use an external program), revision history (use a #vcs, though noting the date of the last change and the person who made it, that is, the owner, is desirable), a list of other files this file uses (use an external program), and the name of the file (use an external tool like RCS).

Use tools such as Javadoc and DOC++ (which seems outdated) to generate API-level documentation for the code base.

Treat documentation as a view of the code base, adhering to the 202207041054# model, unless you treat it as the model itself.

If the documentation is the model for the code base, use external tools to produce different views of it, such as a database schema, programming language records, etc. #literate-programming

Work on the model, update all views automatically.

Prefer web publishing as the method of publishing documentation, as it will always be up to date.

Use one of the markup languages (the authors recommend DocBook, an SGML-based markup language used by the Linux Documentation Project) to write documentation; then, utilising external tools like Pandoc, we can transform it into different formats, such as PDF, troff, slide shows, web pages, etc.

To-dos:

  • Pride and Prejudice

Pride and Prejudice

Own your code, that is, sign it as yours and take responsibility for it.

When in a team, treat others’ code with respect. Lay down a foundation of mutual respect among the developers.

Kent Beck, an advocate of eXtreme Programming, suggested communal ownership of code accompanied by pair programming. It encourages pride of ownership while guarding against the anonymity that can breed sloppiness, mistakes, sloth and bad code.

Source Code Control (Noted)

A source code control system (SCCS)# or configuration management system# should be used to keep track of every change in the code base, whether it concerns the source code or the documentation, and even the versions of the #compiler and #operating-system. You then have the ability to undo past changes and/or go back to a previous version of the software after an inaccuracy or build failure.

Furthermore, an SCCS offers more features, such as noting the author of a change (who changed the code), showing the differences between versions or the changes made in a version, tagging releases for future reference, and easing development tree management.

An SCCS eases the management of the development tree by allowing developers to create a new branch without touching the main trunk. Some even allow changes in the main trunk to be backported to different branches and vice versa.

An SCCS is great for archiving because of its central repository.

Some SCCSs allow concurrent changes using a merge feature.

The authors encourage the use of source code control everywhere, even if you are a single-person team on a one-week project: prototypes#, documentation, etc.

SCCS enables #automation opportunities.

Recommendation of tools:

  • GNU Revision Control System (RCS)
  • Concurrent Version System (CVS)
  • Aegis Transaction-Based Configuration Management
  • ClearCase
  • MKS Source Integrity
  • PVCS Configuration Management
  • Visual SourceSafe
  • Perforce

Shell Games

The command shell is a powerful tool for programmers. Thanks to the piping capability of the shell (at least in Unix environments), it is possible to build complex macros for automation.

A GUI is great since it is straightforward to use and operate. However, it doesn’t allow automation and has limited options for customisation, as a GUI is limited to the capabilities its designers intended.

Power Editing

The authors recommend knowing one editor very well and using it for all editing tasks (code, documentation, memos, system administration, etc.) instead of relying on different tools. Tools with different interfaces and keystrokes add confusion when you switch between them.

The editor should be available on all platforms.

The editor should be configurable (to the user’s preferences), extensible (able to adapt to new programming languages) and programmable (able to perform multistep, complex tasks). It’s a big bonus if the editor has features such as syntax highlighting, auto-completion, auto-indentation, initial code or document boilerplate, tie-in to help systems, or IDE-like features (compile, debug and so on).

Accomplish editing tasks with as few keystrokes as possible.

The authors recommend several editors:

  • Emacs#
  • XEmacs
  • vi
  • Vim
  • elvis
  • CRiSP
  • Brief

Algorithm Speed (Noted)

Use Big O notation# to mathematically approximate the upper bound of an algorithm’s running time or memory usage. Its limitation is that it cannot distinguish between two algorithms with the same growth rate#, since constant factors are hidden.

Estimation is quite simple with Big O notation. Simple loops often indicate an \(O(n)\) running time. If the loops are nested instead, it is probably an \(O(n^2)\) algorithm. If the algorithm halves the input each time around the loop, it will perform in \(O(\log n)\) in the worst case. However, if the algorithm also needs to combine the output after halving, it is most likely an \(O(n \log n)\) algorithm. If instead the algorithm starts looking at the permutations of things, its running time or memory usage is possibly \(O(C^n)\).

If you are not sure, try running it, varying the input record count (or anything else that could impact the runtime performance of the algorithm), and plotting the results.
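One way to do this in Python, sketched below: count basic steps rather than wall-clock time, so the growth rate is deterministic. Both routines here are toys invented to show the shapes named above.

```python
def pairs(items):
    """Nested loop over the input: an O(n^2) routine."""
    count = 0
    for a in items:
        for b in items:
            count += 1  # one "step" of work
    return count

def halving(n):
    """Halve the input each time around the loop: O(log n)."""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

# Vary the input count and watch the growth.
for n in (100, 200, 400):
    print(n, pairs(range(n)), halving(n))
# pairs roughly quadruples each time n doubles;
# halving grows by only one step per doubling
```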

An algorithm with a lower-order running time or memory usage is not necessarily better than a higher-order one on small inputs.

Code profilers# can be useful for counting the number of times the different steps in an algorithm get executed. Some even provide a graphical plot of this information against the input size.

Estimating

Estimate to avoid surprises. (p. 64)

Sometimes high accuracy is desirable for estimations; sometimes it is not. It depends entirely on the context. Use units to express accuracy: the smaller the unit used, the more accurate the estimate feels.

Ask someone who has experience in the problem domain. Draw your estimation from there.

Define the scope of the problem domain before making an estimation.

Build a model of the system; depending on your needs, you may or may not want to trade simplicity for accuracy. Break the model into components, then inspect their role in affecting the model’s performance or functionality, typically in the form of parameters. Try to discover the #math rules that describe how these components interact. Assign a (reasonable) value to each parameter and see how it goes.

It is common to find that we are basing an estimate on other subestimates.

Run multiple calculations, varying the values of the critical parameters, until you work out which ones really drive the model. (p. 67)
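A toy estimation model along these lines. The scenario (how long a batch job takes), the parameters and every number are invented; the point is varying the critical parameters to see which one drives the answer.

```python
def batch_minutes(rows, seconds_per_row, overhead_s=30):
    """Hypothetical model: fixed overhead plus per-row processing time."""
    return (overhead_s + rows * seconds_per_row) / 60

# Run multiple calculations, varying the critical parameters.
for rows in (10_000, 100_000):
    for spr in (0.001, 0.01):
        print(f"rows={rows:7d} s/row={spr}: "
              f"{batch_minutes(rows, spr):6.1f} min")
# Once rows is large, seconds_per_row dominates: a 10x change in it
# moves the estimate far more than the fixed overhead does.
```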

Don’t dismiss an answer from the calculation phase just because it seems strange. Check whether your arithmetic is right. If it is, then your understanding of the problem or the model is likely incorrect.

Record estimates and their subestimates.

If the estimate is wrong, find out why.

Practice incremental development. Refine your schedule after each completed iteration.

Iterate the schedule with the code. (p. 69)

Don’t give an estimate outright; take some time to calculate it.

Solving Impossible Puzzles (Noted)

Don’t think outside the box – find the box. (p. 213)

The box is the boundary of constraints and conditions. And the box might be larger than you think.

To solve a puzzle, you have to identify the real (not imagined) constraints and find a solution within them. Some constraints are absolute and must be honoured regardless of how ridiculous they seem; others are preconceived notions that serve merely as distractions.

Know how much freedom you possess in solving the puzzle.

Challenge any preconceived notions and evaluate whether they are real constraints. Prove it if you think that one particular path can’t be taken.

Identify the most restrictive constraints first, then fit the remaining constraints within them.

The authors propose asking ourselves the following questions when we think we’re taking a wrong path:

  • Is there an easier way?
  • Are you trying to solve the right problem, or have you been distracted by a peripheral technicality?
  • Why is this thing a problem?
  • What is it that’s making it so hard to solve?
  • Does it have to be done this way?
  • Does it have to be done at all?

Not Until You’re Ready (Noted)

When in doubt, slow down your pace and take the time to work out what the doubt actually is. A good way to crystallise the doubt, suggested by the authors, is to develop a prototype for it. If you feel bored after a period of developing it, there is probably nothing wrong with the concept. Otherwise, it is a chance for your brain to realise that some premises or assumptions about the concept are wrong.

The Specification Trap

A specification cannot and will not capture every detail and nuance of a system; attempting that only restricts the programmers’ creativity instead of assisting them. A specification should be treated as a view of the requirements, just like the code implementation, adhering to the DRY Principle#.

Natural language eventually meets its limitations, as in fields where precision is what matters, such as law, #philosophy and #science, where it gets bent in alien ways to convey exact meaning. Don’t treat natural languages as a better explanatory medium than programming languages.

Some things are better done than described. (p. 218)

Prototyping# or Tracer Bullet Development is a great tool for breaking the specification spiral.

Circles and Arrows

Don’t be a slave to formal methods. (p. 220)

Be critical of formal methods’ claims about improving team performance. Apply them in the context of your development practices and capabilities, and figure out whether they are suited to it. Even if they are desirable, don’t underestimate the cost of adopting new practices: there could be a significant drop in productivity and quality before the team can enjoy the benefits a formal method brings. Treat formal methods as one of the tools in the toolbox, and meld them into your existing working practices rather than adopting them by force.

The authors point out several shortcomings of formal methods. First, most formal methods capture requirements using a combination of diagrams that embed the designer’s bias in understanding the requirements and are not end-user friendly. Second, formal methods tend to favour specialisation, which can lead to poor communication and duplicated effort, as they discourage people from understanding the system as a whole. Third, most formal methods treat relationships between objects as static rather than dynamic, which can hinder the flexibility and dynamism of the system.

Expensive tools do not produce better designs. (p. 222)

Your Knowledge Portfolio

Knowledge Portfolio: all the facts programmers know about computing, the application domains they work in, and all their experience. (p. 13)

The authors recommend managing knowledge portfolio as follows:

  • Make learning a habit
  • Diversify your knowledge for long-term success; don’t rely only on the technologies you currently know, even if they are dominant today
  • Balance the portfolio between conservative and high-risk, high-reward investments
  • Learning an emerging technology can be a good choice, assuming it has the potential to become mainstream one day
  • Review and rebalance the portfolio periodically

The authors suggest:

  • learn at least one new language annually
  • read a technical book each quarter
  • read non-technical books to seek out the human side of the question
  • take classes at a local community college or university
  • participate in local user groups to see what people are working on
  • experiment with different environments
  • stay current by subscribing to magazines and journals
  • find newsgroups (communities that share a common interest) to get wired

The process of learning will expand your thinking, opening you to new possibilities and new ways of doing things. (p. 15)

If you realise that you know little or nothing about a particular thing, find the answer, whether by surfing the web, asking people on the Internet or going to the library. If you can’t find the answer yourself, find out who can. Ask them about it, and expand your personal network at the same time.

Critically analyse what you read and hear. (p. 16)

The authors recommend several resources for updating our knowledge portfolio:

  • IEEE Computer (magazine)
  • IEEE Software (magazine)
  • Communications of the ACM (magazine)
  • SIGPLAN (magazine, published by ACM, focus on programming languages)
  • Dr. Dobb’s Journal (magazine, terminated in 2014)
  • The Perl Journal (magazine focus on #perl)
  • Software Development Magazine (focus on project management and software development)
  • Slashdot
  • Cetus Links
  • WikiWikiWeb

How to Balance Resources

The routine or object that allocates a resource should be responsible for deallocating it. (p. 129)

To avoid possible confusion and errors, the following two practices should be adopted when coding in a non-#oop style, especially in programming languages that only support non-OOP paradigms:

  • Deallocate resources in the opposite order to that in which they were allocated, so that a resource is not orphaned in the case where one resource contains references to another.
  • Allocate a set of resources in the same order wherever that pattern occurs, such as in a multithreading environment#. It is a precaution against possible Deadlocks#.
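The second point can be sketched in Python with lock ordering. The `Account` class and id-based ordering rule are invented for illustration: both transfer directions acquire the two locks in the same fixed order, so two threads can never each hold the lock the other needs.

```python
import threading

class Account:
    _next_id = 0
    def __init__(self, balance):
        self.id = Account._next_id  # gives every account a fixed rank
        Account._next_id += 1
        self.balance = balance
        self.lock = threading.Lock()

def transfer(src, dst, amount):
    # Always acquire locks in the same (id-based) order, regardless of
    # transfer direction: the precaution against deadlock described above.
    first, second = sorted((src, dst), key=lambda a: a.id)
    with first.lock:
        with second.lock:
            src.balance -= amount
            dst.balance += amount

a, b = Account(100), Account(100)
t1 = threading.Thread(target=transfer, args=(a, b, 30))
t2 = threading.Thread(target=transfer, args=(b, a, 10))
t1.start(); t2.start()
t1.join(); t2.join()
print(a.balance, b.balance)
```

Had each thread locked its own `src` first, the two opposite-direction transfers could deadlock by each holding one lock and waiting on the other.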

#exception could make Resource Management# tricky.

If it is not possible to eliminate a raw pointer to an object, encapsulate the pointer in a wrapper class or structure.

There are three options for handling a dynamic #data-structure :

  • The top-level structure is responsible for deallocating itself and any substructures it contains. It recursively deallocates the data it owns.
  • The top-level structure is simply deallocated, leaving the structures it points to orphaned (not pointed to by anyone).
  • The top-level structure refuses to be deallocated if any of its substructures are still alive.
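The first option can be sketched in Python. The `Node` class is invented for the example, and the `closed` flag stands in for whatever cleanup the resource actually needs (freeing memory, closing handles, and so on).

```python
class Node:
    """A structure that owns its children and cleans them up recursively."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.closed = False

    def deallocate(self):
        # Option 1: the top-level structure recursively releases
        # every substructure it owns before releasing itself.
        for child in self.children:
            child.deallocate()
        self.closed = True

leaf1, leaf2 = Node("leaf1"), Node("leaf2")
root = Node("root", [leaf1, leaf2])
root.deallocate()
print(root.closed, leaf1.closed, leaf2.closed)
```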

In #c , write a module for each major structure and provide standard allocation and deallocation routines for it.

The authors recommend the following tools for checking memory leaks:

  • Purify
  • Insure++

Miscellaneous

Always backup. (p. 24)

A loosely coupled system is easier to reconfigure and reengineer. (p. 59)

Separate infrastructure components (database, communication interface, middleware layer) from the application. (p. 60)

Don’t rely on the properties of things you can’t control. (p. 61)

Run unit tests during build process to ensure orthogonality. (p. 64)

Is there really a need for global variables?

Abstractions live longer than details. (p. 209)

Test your estimates. (p. 182)

Links to this page
  • Version Control System (VCS)

@Hunt1999 advises using a VCS wherever possible, even in small projects, prototypes#, and documentation#, as it is a great archiving tool to have.

  • Unix Philosophy

    In Unix, almost all things, such as users, passwords, networking configuration, are configured through and/or kept in plain text.

  • Test Harness

    A viable Test Harness consists of several functionalities: @Hunt1999

  • Test Driven Development (TDD)

Note: Even with extensive testing, one cannot avoid drawing #naive assumptions about a routine, which result in false causalities and coincidental outcomes. Hunt et al. advise: “Don’t assume it, prove it”.

Hunt et al. recommend that both Validation and Verification of System Requirements# and Usability Testing# be done as early as possible, and that Regression Testing# be incorporated into the build system so each build can be compared to the previous one.

  • SOLID

Make a local inversion of dependencies at the lower level of abstraction and then move it to the higher level. The user should not directly interact with low-level modules; instead, they should rely on a general contractor with high abstractions that encapsulates such dependencies and uses the modules on the user’s behalf (The Pragmatic Programmer). See more in 202207041054# or 202207041156#.

    SRP brings two benefits: @Hunt1999

Write shy code: don’t reveal yourself to others, and don’t interact with too many people (The Pragmatic Programmer). This limits the interaction between modules or interfaces, so if one gets compromised, the others remain unaffected. Make sure the implementation of the interface adheres to the 202207031635#.

    Orthogonality: … We want to design components that are self-contained: independent, and with a single, well-defined purpose ([…] cohesion). When components are isolated from one another, you know that you can change one without having to worry about the rest. @Hunt1999

  • Rubber Ducking

Let’s say you are debugging# a program and found a bug or a problem. Some might just keep their mouth shut and suffer on their own. Instead, Hunt et al. suggest finding a companion! You can talk through the details of the bug or problem with a literal rubber duck (if you don’t have a friend, which is relatable), a colleague, a boss, or anyone who is willing to listen. Their role is to nod, quack, or simply make an agreeing sound; it doesn’t matter which. This way, you organise and evaluate your thoughts more thoroughly, which makes finding the solution easier. So grab a rubber duck and take it to your workplace!

  • Relationship Between Problem Report, Symptoms and Defects

Note: Interview the user who reported the bug in order to gather a sufficient level of detail about the dysfunction. (The Pragmatic Programmer)

  • Refactoring

The user experience should be taken into account before refactoring, especially for library developers and/or maintainers of published interfaces. The changes must be made known, and if such changes are scheduled, the users need to know when they will take place. (The Pragmatic Programmer)

Hunt et al. advise refactoring the codebase sooner rather than later, before the task grows more tedious and larger. Regular refactoring should be encouraged, as drastic improvements in design result from the accumulation of small changes to the project (Fowler). Do not be a slave to history: be ready to replace any code in the project.

  • Pure Function

A pure function, in the mathematical sense, is a function that has no side effects, does not change the state of the program and always produces the same result when given the same arguments. I think that pure functions adhere to the concept of orthogonality#.
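A small Python contrast between a pure and an impure function (both functions here are invented for illustration):

```python
def pure_add(items, value):
    """Pure: no side effects; the same input always gives the same output."""
    return items + [value]  # builds a new list, leaves the argument alone

history = []  # external program state

def impure_add(items, value):
    """Impure: mutates its argument and external state."""
    items.append(value)
    history.append(value)  # side effect on program state
    return items

xs = [1, 2]
ys = pure_add(xs, 3)
print(xs, ys)       # xs is untouched by the pure call
impure_add(xs, 3)
print(xs, history)  # xs and history were both changed
```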

  • Locate a Bug
    use tracing statements (printing messages as simple as “got here” or “value of x is 2” to the screen or a log file, as suggested by Hunt et al.) and/or logging facilities
  • Law of Demeter

    The Law of Demeter, introduced in Assuring Good Style for Object-Oriented Programs#, states that a method can send messages only to its preferred-supplier objects: its argument objects, itself, objects created in the method or in global scope, and the immediate subparts of self, such as elements or objects instantiated within the class. In other words, a method should call only methods belonging to itself, to parameters passed into the method (objects of a class type), and to objects created within the method or its class. (The Pragmatic Programmer)
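A toy Python sketch of the rule (the `Customer`/`Wallet` classes are invented): the violating version reaches through another object's internals, while the compliant version talks only to its immediate collaborator.

```python
class Wallet:
    def __init__(self, cash):
        self.cash = cash
    def withdraw(self, amount):
        self.cash -= amount

class Customer:
    def __init__(self, cash):
        self._wallet = Wallet(cash)    # immediate subpart of self
    def pay(self, amount):
        self._wallet.withdraw(amount)  # OK: a method may use its own parts

def charge_violating(customer, amount):
    # Reaches through the parameter into its internals: the caller now
    # depends on Customer *having* a Wallet with a withdraw method.
    customer._wallet.withdraw(amount)

def charge_compliant(customer, amount):
    # Talks only to the parameter object itself.
    customer.pay(amount)

c = Customer(100)
charge_compliant(c, 30)
print(c._wallet.cash)
```

The compliant caller keeps working even if `Customer` later stores its cash some other way, which is exactly the decoupling the law is after.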

  • HashiCorp Configuration Language (HCL)

    HCL is a declarative configuration language developed by HashiCorp, where .tf is its well-known file extension. It is mainly used in HashiCorp products such as #HashiCorp Terraform. It is a good example of a domain language. The general syntax looks like the following:

  • Documentation Guide

    For documenting requirements, try using use case templates or diagrams. They are effective media for communication, especially with non-technical people. The Pragmatic Programmer shows an example use case based on Cockburn’s template:

    Note: Make sure the documentation is in good style and easy to read in terms of visual presentation. Hunt et al. recommend involving readers with early drafts.

  • Debugging Tools

    There are various debugging tools that a programmer can use to find out what is going wrong in the codebase. Some, as Hunt et al. suggest, provide great visualisation of the data.

  • Crash Program Earlier

    Crashing the program earlier has two advantages: the program is dead, and the resources are reclaimed by the system. A dead program is better than a crippled one. The Pragmatic Programmer

  • Concurrency

    The hungry consumer model, advocated by Hunt et al., emphasises the use of services (independent, concurrent objects encapsulated behind well-defined, consistent interfaces) instead of a central scheduler among consumers#. Each consumer takes tasks from a work queue without bothering the others’ business, and when it finishes, it grabs some more from the queue. Let’s say a consumer gets bogged down by large input data: the others will just pick up the slack, allowing every consumer to process data at its own pace.

    That being said, designing a program with concurrency in mind leads to a smoother workflow, cleaner architecture, clearer interfaces, and better performance, and allows more room for scaling and performance optimisation (The Pragmatic Programmer). We can get rid of time or order dependencies by inspecting which tasks or actions could happen in parallel, with the help of tools such as a UML activity diagram. Thinking in a linear order instead can result in temporal coupling, that is, coupling in time, where tick must happen before tock.

  • Code Generator

    A code generator is an automation tool that generates code (a programming language, markup language or SQL) from a schema, which can be written in another language or just plain text#. Hunt et al. divide code generators into two types according to their role in the source repository: passive and active.

    An active code generator is often used as a bridge between two disparate environments, to avoid violating the #202206171004. It typically takes a schema, often in a relatively simple configuration language# or just plain text#, as input, and produces the defined structures in two different languages (SQL and #cpp, for example). When the schema changes, regenerating the output reflects that change in every target language.
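
    As a toy illustration of an active generator (the schema format and the `to_sql`/`to_python` helpers are invented for this sketch), a single plain-text schema can drive both a SQL DDL statement and a Python class, so the two never drift apart:

    ```python
    # One plain-text schema is the single source of truth; both outputs
    # are regenerated from it whenever it changes.
    SCHEMA = """\
    name:TEXT
    age:INTEGER
    """

    SQL_TYPES = {"TEXT": "TEXT", "INTEGER": "INTEGER"}
    PY_TYPES = {"TEXT": "str", "INTEGER": "int"}

    def parse(schema: str) -> list[tuple[str, str]]:
        return [tuple(line.split(":")) for line in schema.splitlines() if line]

    def to_sql(table: str, fields: list[tuple[str, str]]) -> str:
        cols = ", ".join(f"{name} {SQL_TYPES[typ]}" for name, typ in fields)
        return f"CREATE TABLE {table} ({cols});"

    def to_python(cls: str, fields: list[tuple[str, str]]) -> str:
        body = "\n".join(f"    {name}: {PY_TYPES[typ]}" for name, typ in fields)
        return f"@dataclass\nclass {cls}:\n{body}"

    fields = parse(SCHEMA)
    print(to_sql("person", fields))   # → CREATE TABLE person (name TEXT, age INTEGER);
    print(to_python("Person", fields))
    ```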

    Note: Even though there are many powerful third-party code generators that can even produce a whole project skeleton, if one doesn’t understand the produced code, Hunt et al. advise against using it.

  • Box Evaluation

    However, the reality, as stated in The Pragmatic Programmer, is that people often overlook some important constraints and conditions, or instead focus on imagined constraints, that is, preconceived notions. The box could be considerably larger than people think. Therefore, the software engineer’s primary task is to find the box: the real boundary of constraints and conditions.

    If you are not sure, if you think you’ve taken a wrong path, or if you simply don’t have time for the thought experiment, Hunt et al. suggest the following questions to evaluate:

  • Assertions

    In C and C++, the assert and _assert macros are usually viewed as debugging tools. However, as recommended by Hunt et al., whenever you feel that #something can’t happen, it is a great opportunity to ensure it won’t by using assertions. Even though assertions add some overhead to the program, turning them off when building the release binary is a bad idea, since that assumes tests alone will find every bug in the codebase, which is not the case.
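
    A short Python sketch of the same idea (the `normalise` function is a made-up example). Note that running `python -O` strips assert statements, much like compiling C with NDEBUG; that is exactly the switch the notes advise against flipping for release builds.

    ```python
    def normalise(values: list[float]) -> list[float]:
        total = sum(values)
        # "This can't happen": a caller should never pass an empty or
        # all-zero list.  The assertion turns that assumption into a check.
        assert total != 0, "normalise() called with zero-sum input"
        result = [v / total for v in values]
        # Post-condition: normalised values must sum to (approximately) 1.
        assert abs(sum(result) - 1.0) < 1e-9
        return result

    print(normalise([1.0, 3.0]))  # → [0.25, 0.75]
    ```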

#documentation #oop #functional-programming #algorithm #data-structure #literature #refactoring #test #devops #exception #declarative #python #c #cpp #java #perl #Domain #vcs #literate-programming #compiler #operating-system #automation #math #philosophy #science