I was asked to give an overview at work of what Application Architecture means to me, so... à la moi, "what works for me".
The following is a list of topics which come to my mind when I hear application architecture. It is a reflection of what I use, how I develop and the processes which I follow.
How do I start?
- Ask for an initial requirements document (the shorter the better)
- The Goals
- Non-functional (measurable)
- Understand what exists today and how it is done
- Ask for any Domain Analysis which has been done
- Discuss with different domain experts, and continue to consult and inform them
- Whiteboarding (Diagrams)
- Whiteboarding with Post-Its to decompose the problem
- How to test
Styles which come to mind when I code
- The "Something does something" approach
- Simply start with a test file called Something, containing a test called "does something", and start implementing everything inside that file (language dependent)
- If there are one or more existing systems involved, start using tests as questions to understand, record and revisit how you can work with the external systems, and let this inform your development.
- This is also a good way to inform how to create a test harness for the external systems
- Extract and organise during the refactor stage of Red, Green, Refactor
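The "Something does something" starting point might look like this minimal sketch, where Greeter and its single test are hypothetical stand-ins for whatever you are actually building:

```python
import unittest


# Start with everything in one file: the test and the implementation it
# drives. Extract and organise later, during the refactor stage.
class Greeter:
    def greet(self, name):
        return f"Hello, {name}"


class SomethingTest(unittest.TestCase):
    def test_does_something(self):
        self.assertEqual(Greeter().greet("Ada"), "Hello, Ada")
```

Run it with `python -m unittest` and let Red, Green, Refactor take it from there.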
- Hexagonal Architecture AKA Ports and Adapters AKA Orthogonal Architecture AKA Onion Architecture (makes you cry)
- More testable code
- Separation of concerns
- Start developing with in-process implementations focusing on behaviour instead of implementation
- Separate services from APIs (e.g. a single service with a REST+JSON port and adapter, a SOAP+XML port and adapter, and a BSON port and adapter)
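A minimal ports-and-adapters sketch, where MemberService and friends are hypothetical names: the domain core depends only on the port, so development can start with an in-process adapter and a SQL or Mongo adapter can slot in later:

```python
from abc import ABC, abstractmethod


# Port: the interface the domain core depends on.
class MemberStore(ABC):
    @abstractmethod
    def save(self, member): ...

    @abstractmethod
    def find(self, member_id): ...


# In-process adapter: lets you focus on behaviour instead of infrastructure.
class InMemoryMemberStore(MemberStore):
    def __init__(self):
        self._members = {}

    def save(self, member):
        self._members[member["id"]] = member

    def find(self, member_id):
        return self._members.get(member_id)


# The service knows only the port, never the concrete adapter.
class MemberService:
    def __init__(self, store: MemberStore):
        self._store = store

    def register(self, member_id, name):
        self._store.save({"id": member_id, "name": name})

    def lookup(self, member_id):
        return self._store.find(member_id)
```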
- S.O.L.I.D Principles
- Principle of least information (not sure who coined this, but it basically refers to avoiding getters and setters and correctly modelling behaviour and interactions)
- Tell don't ask
- Don't start with dependency injection
- Do follow Dependency Inversion
- Do follow Inversion of Control
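The last few bullets can be sketched together, with hypothetical names: SignupService tells its collaborator what to do (tell, don't ask), depends on an abstraction rather than a concrete class (Dependency Inversion), and receives that dependency from the outside (Inversion of Control) with no DI framework in sight:

```python
from abc import ABC, abstractmethod


# The high-level policy owns the abstraction it needs...
class Notifier(ABC):
    @abstractmethod
    def send(self, recipient, message): ...


# ...and concrete details implement it.
class RecordingNotifier(Notifier):
    def __init__(self):
        self.sent = []

    def send(self, recipient, message):
        self.sent.append((recipient, message))


class SignupService:
    def __init__(self, notifier: Notifier):
        # Inversion of Control: the caller supplies the dependency.
        self._notifier = notifier

    def register(self, email):
        # Tell, don't ask: no getters, just an instruction to a collaborator.
        self._notifier.send(email, "Welcome!")
```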
- Extreme Programming
- Treat cross cutting concerns in a cross cutting manner
- When creating service processes separate the host from the service e.g. HTTP Host, TCP CLI Host, AMQP Host etc...
- This is an extra separation from the point above regarding APIs, e.g. the HTTP Host could support different content types; the same goes for the other hosts.
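A rough sketch of that host/service separation, again with hypothetical names: the service is a plain object, and each host adapts one transport (and its content types) to it:

```python
import json


# The service knows nothing about transports or content types.
class EchoService:
    def handle(self, message):
        return {"echo": message}


# A CLI host adapts line-based input/output to the service.
class CliHost:
    def __init__(self, service):
        self._service = service

    def run(self, line):
        return json.dumps(self._service.handle(line))


# An HTTP host would adapt requests, and negotiate content types itself.
class HttpHost:
    def __init__(self, service):
        self._service = service

    def handle_request(self, body, content_type="application/json"):
        result = self._service.handle(body)
        if content_type == "application/json":
            return json.dumps(result)
        raise ValueError(f"unsupported content type: {content_type}")
```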
Try the following:

```shell
$ python
>>> import this
```
Working with legacy or external systems
- Test Harness
- Decouple integrations
- Understand capabilities and limits (any SLAs or restrictions known or available?)
- Can test instances of the legacy or external systems be provisioned?
- Michael Feathers' "Working Effectively with Legacy Code" <-- spoiler (contains a good definition of legacy)
- Two keyboards (3+ for a mob)
- Ping pong (Bob tests, Carol implements; Bob refactors, Carol tests; Bob implements, Carol refactors)
- Pomodoro (a cult thing if you buy the merchandise, but a good exercise for focus and for spotting interruption patterns)
- Whiteboarding (Diagrams)
- CRC (classes, responsibilities and collaborators)
The following diagram types are the ones which I find myself using most often to describe the application and integration architectures at a high level. These find their way onto the whiteboard and into high-level architecture documentation.
- Block diagrams
- Logical diagrams
- UML Physical Deployment Diagrams
- UML Sequence Diagrams
- Network diagrams
- Mind Maps
One thing which I always try to do is not mix the logical with the physical. In a logical diagram, show the different message flows but not the transport on which they flow. In a physical deployment diagram, show communication flows including the protocols and ports which will be used. In a network diagram, show the different VLANs which may be set up, including multi-homed instances, security zones, firewalls, switches etc...
I very rarely use Activity Diagrams or Class Diagrams. I find functional decomposition diagrams useful for understanding existing systems and their capabilities.
- DevEnv under git
- VIM FOR ME (I shouldn't and do not care what IDE you use)
- ... but if you use tmux, there is wemux and the like which allow for epic remote pairing!!
- Console based development environment
- Virtual Development Environments (e.g. Vagrant + Virtual Box + Ansible)
- Continuous testing and linting (where applicable, e.g. a legacy system taking 30 minutes to test anything means looking for efficiencies to reduce the time it takes to test)
- IDE Integration (GIT, Style, Quality, Code Inspection and Completion)
- Dev Productivity Integration
- dotfiles IDE Integration (Outcome based configuration)
- SH|BASH|ZSH + (Node.js|Python|Ruby|Perl) (automate everything to avoid duplicate effort in anything)
- Architectural diagrams generated programmatically where applicable
- graphviz and .dot
- Markdown (Mermaid CLI)
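As a sketch of diagrams-as-code, a few lines of Python can emit Graphviz .dot text so the diagram lives under version control beside the code (the node names here are hypothetical):

```python
# Emit a simple left-to-right block diagram as Graphviz .dot text.
def block_diagram(edges):
    lines = ["digraph architecture {", "  rankdir=LR;"]
    for src, dst in edges:
        lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)


dot = block_diagram([
    ("HTTP Host", "Member Service"),
    ("Member Service", "Member Store"),
])
print(dot)  # pipe the output into `dot -Tsvg` to render it
```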
DON'T STANDARDISE ON THE IDE, STANDARDISE ON THE OUTCOME
Use quality and style rules, e.g. in a .jshintrc:

```
"maxcomplexity": 3,
"maxdepth": 2,
"maxparams": 4,
"maxstatements": 25
```

and in an .editorconfig:

```
[**.js]
e4x = true
indent_style = space
indent_size = 2
```
How do I define testing types?
Overall (you have heard this all before but it certainly works for me)
- BDD - build the correct thing
- TDD - build the thing correctly
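One way the two show up in code, using a hypothetical Basket: the test name reads as a statement of behaviour someone from the business could review (build the correct thing), while the Red, Green, Refactor cycle underneath drives the implementation (build the thing correctly):

```python
import unittest


class Basket:
    def __init__(self):
        self._prices = []

    def add(self, price):
        self._prices.append(price)

    @property
    def total(self):
        return sum(self._prices)


# BDD-flavoured naming: the behaviour is the specification.
class DescribeBasketCheckout(unittest.TestCase):
    def test_given_two_items_when_totalled_then_prices_are_summed(self):
        basket = Basket()
        basket.add(3)
        basket.add(4)
        self.assertEqual(basket.total, 7)
```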
Types of testing
- Unit (any number of code artefacts, all aspects of which you control)
- Integration (Implementations of Interfaces e.g. SqlMemberService, MongoAnalysisService, LogstashLoggingAgent, testing their integration with the external system)
- Acceptance (largely synthetic happy path with failure overlaid)
- Synthetic testing (real scenarios executed as Bob the Ghost user against the live service)
- Performance (Understand the baselines including business, application, infrastructure, networking)
- Stress (Run performance tests whilst putting different systems involved under different states of failure)
- Load (Incrementally increase the load on the system until such time as the system starts to degrade)
- Longevity (8, 24, 36 hour soak tests)
- Impulse (Cold start with performance e.g. turn off caching, do not warm up indexes etc...)
- Fuzz (different types of inputs that are not expected, e.g. a dynamic language invoked with an int not a string; a static invoke with an Int64 not an Int32 when the type is a generic int, etc...)
- Mutation (during a build phase: negate truths with not(!), change mathematical operators e.g. + to -, etc... and spot any interesting failures, including knock-on effects)
- Relative fast non-functional (e.g. run 30 second performance tests each build and compare the results against the last n runs to spot relative differences)
- Static code analysis (Security, Style, Quality, Comparison); break the build again based on relative analysis
- Failure with Integration Points and Systems (plug for Release It! by Michael Nygard)
- Never ending
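A crude fuzz sketch of the idea from the list above: throw unexpected input types at a function and record anything that fails in a way it did not plan for (parse_age and buggy_upper are hypothetical functions under test):

```python
def parse_age(value):
    # Rejects bad input deliberately, with ValueError/TypeError.
    age = int(value)
    if age < 0:
        raise ValueError("age cannot be negative")
    return age


def buggy_upper(value):
    # Crashes with AttributeError on anything that is not a string.
    return value.upper()


def fuzz(fn, inputs):
    failures = []
    for value in inputs:
        try:
            fn(value)
        except (ValueError, TypeError):
            pass  # rejected cleanly: fine
        except Exception as exc:  # anything else is an interesting failure
            failures.append((value, exc))
    return failures


surprising_inputs = ["42", 42, -1, "forty-two", None, [1], 2**64, float("nan")]
```

parse_age survives the whole list; buggy_upper does not, and the failures it records are exactly the kind of knock-on effects worth investigating.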
Either Feature Toggles or a GIT Branching Model
- Release (Release, Verification, Rollback)
- Smoke Tests
- Zero downtime
- Blue Green
- Canary Deployment
- Shadow Deployment
- Malware (XML Bombs, ZIP bombs, Content Inspection)
- Protective Monitoring
- Static Code Analysis
- Transport Security
- Message Based Security
- Post Quantum Cryptography
- Intrusion Detection
- Pen Testing (push left)
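As a small example of content inspection against ZIP bombs, using only the standard library: check the archive's declared sizes and reject suspicious expansion ratios before extracting anything (the thresholds are illustrative assumptions, not recommendations):

```python
import io
import zipfile

MAX_TOTAL_UNCOMPRESSED = 100 * 1024 * 1024  # 100 MiB, illustrative
MAX_RATIO = 100  # uncompressed:compressed, illustrative


def looks_like_zip_bomb(data):
    # Read declared sizes from the central directory; extract nothing.
    with zipfile.ZipFile(io.BytesIO(data)) as archive:
        total = sum(info.file_size for info in archive.infolist())
        compressed = sum(info.compress_size for info in archive.infolist())
    if total > MAX_TOTAL_UNCOMPRESSED:
        return True
    return compressed > 0 and total / compressed > MAX_RATIO
```

Note that the declared sizes can themselves lie, so a hardened version would also enforce limits while streaming the actual decompression.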
Non-Functional Side of the Story
Wikipedia has a great collection of non-functional requirement types which, when you look at them, really provide food for thought about aspects of the system where you never realised you could add in, and assert on, a certain type of quality early!!
The Release It! book provides a really good explanation and real-world examples of a common set of non-functional aspects.
Some of the books which have really stuck in my mind
- Building Microservices by Sam Newman
- Infrastructure as Code by Kief Morris
- Release It! by Michael T. Nygard
- Growing Object-Oriented Software, Guided by Tests by Nat Pryce, Steve Freeman
- Working Effectively with Legacy Code by Michael Feathers
- Security Engineering: A Guide to Building Dependable Distributed Systems, Second Edition by Ross J. Anderson
- The Pragmatic Programmer: From Journeyman to Master by Andrew Hunt and David Thomas
- Effective Monitoring and Alerting by Slawek Ligus
- Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, Video Enhanced Edition by Jez Humble, David Farley