Commit 1d54d34: Merge pull request #45 from ioggstream/feat/editorial (Editorial fixes); 2 parents 2d12866 + 529f421. 3 files changed (361 additions, 222 deletions); USAGE.md: 360 additions.

---

This article explains how to use DSOMM and describes its dimensions and the corresponding sub-dimensions.

# Pre-Requirements

Before you start, there is a kind of maturity level 0: the pre-requirements.

The pre-requirements are heavily based on (and mostly copied from) [AppSecure NRW](https://github.com/AppSecure-nrw/security-belts/tree/master/white).

## Onboard Product Owner and other Managers

Software vulnerabilities might be exploited once shipped to production.

This results in risks for the organization.

The person responsible for judging "risks vs. revenue" on your product
(e.g., Product Owner, manager) must be convinced that continuously improving
security through Security Belts is the best way to minimize risk and build
better products.
Judging security risks requires a company-specific understanding of security
risk management.
Ensure that the aforementioned roles have this knowledge,
and train them if this is not the case.

- Identify the persons who judge "risks vs. revenue".
- Raise the awareness of these persons
  (e.g., show how easy it is to exploit software).
- Convince these persons that security is a continuous effort
  and that Security Belts are a cost-efficient solution.

### Benefits

- The Product Owner is aware that software can have security vulnerabilities.
- Resources are allocated to improve security -
  to avoid, detect, and fix security vulnerabilities.
- Management can make well-informed decisions when judging "risks vs. revenue".
- The Product Owner has transparency on how secure the product is.

## Get to Know Security Policies

Identify the security policies of your organization and adhere to them.

Share with the Security Champion Guild how you perform the required activities
from the policies, so others can benefit from your experience.
In addition, provide feedback to the policy owner.
Whenever you find yourself not adhering to the policies, communicate this to
the person responsible for judging "risks vs. revenue" on your product
(e.g., your Product Owner, manager), so they are aware of being out of policy.

### Benefits

- Building and operating software securely is hard; utilizing standards
  (as described in the security policies) makes it at least a bit easier.
- Basic security risks, which are covered by security policies, are handled.

## Continuously Improve your Security Belt Rank

Security is like a big pizza: you cannot eat it whole, but you can slice it
and continuously eat small slices.
To make this happen, ensure that the Product Owner continuously prioritizes
the security belt activities for the next belt highly within the product
backlog.
Security belt activities make good slices because they are of reasonable
size and have a defined output.
Celebrate all your implemented security belt activities.

### Benefits

- The team has time to improve its software security.
- The team's initially high motivation and momentum can be used.
- The Product Owner has transparency on the investment in and benefit of
  security belts.
- The team is improving its software security.

## Review Security Belt Activities

Let the Security Champion Guild review your implementations of security belt
activities (or the concepts of these implementations) as soon as possible.
This helps to eradicate misunderstandings of security belt activities early.

### Benefits

- The quality of the implementation is increased.
- Successes can be celebrated along the way.
- There is early feedback before the belt assessment.

## Utilize Pairing when Starting an Activity

When implementing a security belt activity, approach a peer from the
Security Champion Guild to get you started.

### Benefits

- Knowledge of how to implement security belt activities is spread,
  so everyone benefits from prior knowledge.
- Starting to implement security belt activities with guidance is easier.
- The team improves its software security while avoiding previously made
  mistakes.

# Dimensions

In the following, the dimensions and their corresponding sub-dimensions are
described.
The descriptions are heavily based on (and mostly copied from) the
[OWASP Project Integration Project Writeup](https://github.com/OWASP/www-project-integration-standards/blob/master/writeups/owasp_in_sdlc/index.md).

## Implementation

The Implementation dimension covers the "traditional" hardening of software
and infrastructure components.

There is an abundance of libraries and frameworks implementing secure
defaults.
For frontend development, [ReactJS](https://reactjs.org/) seems to be the
latest favourite in the JavaScript world.

On the database side, there are [ORM](https://sequelize.org/) libraries and
[Query Builders](https://github.com/kayak/pypika) for most languages.

If you write in Java, the
[ESAPI project](https://www.javadoc.io/doc/org.owasp.esapi/esapi/latest/index.html)
offers several methods to securely implement features, ranging from
cryptography to input escaping and output encoding.

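To illustrate what such secure defaults buy you, here is a minimal sketch contrasting string concatenation with a parameterized query. It uses Python's built-in `sqlite3` rather than the libraries named above, and the table and values are invented for the example; ORMs and query builders apply the same binding principle for you by default.

```python
import sqlite3

# Hypothetical user table, purely for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

malicious = "' OR '1'='1"

# Vulnerable: attacker-controlled input is concatenated into the SQL string,
# so the injected OR clause matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE role = '" + malicious + "'"
).fetchall()

# Safe: the driver passes the value as a bound parameter, never as SQL text.
safe = conn.execute(
    "SELECT name FROM users WHERE role = ?", (malicious,)
).fetchall()

print(vulnerable)  # [('alice',)] - the injection matched every row
print(safe)        # [] - the literal string matched nothing
```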
**Example Low Maturity Scenario:**

The API was queryable by anyone, and GraphQL introspection was enabled,
since all components were left in debug configuration.

Sensitive API paths were not whitelisted.
The team found out that the application was being attacked when the server
showed very high CPU load.
The response was to bring the system down; very little information about the
attack was found, apart from the fact that someone was mining
cryptocurrencies on the server.

**Example Low Maturity Scenario:**

The team attempted to build the requested features using vanilla NodeJS.
Connectivity to backend systems is validated by firing an internal request
to `/healthcheck?remoteHost=<xx.xx.xx>`, which attempts to run a ping
command against the IP specified.
All secrets are hard-coded.
The team uses off-the-shelf GraphQL libraries, but versions are not checked
using [NPM Audit](https://docs.npmjs.com/cli/audit).
Development is performed by pushing to master, which triggers a webhook that
uses FTP to copy the latest master to the development server, which will
become production once development is finished.

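The `remoteHost` pattern above is a classic command injection vector. A minimal sketch of the safer alternative, shown here as a standalone Python function with invented names rather than the NodeJS service itself: validate that the parameter parses as an IP address, and pass an argument list instead of a shell string so metacharacters are never interpreted.

```python
import ipaddress
import subprocess

def safe_ping(remote_host: str) -> bool:
    """Ping remote_host only if it is a syntactically valid IP address.

    ipaddress.ip_address raises ValueError for anything else, so input
    like '8.8.8.8; rm -rf /' is rejected before any process is spawned.
    """
    addr = ipaddress.ip_address(remote_host)
    # An argument list (no shell=True) means no shell metacharacters apply.
    result = subprocess.run(
        ["ping", "-c", "1", str(addr)],
        capture_output=True,
        timeout=5,
    )
    return result.returncode == 0

# The injection payload never reaches a shell.
try:
    safe_ping("8.8.8.8; cat /etc/passwd")
except ValueError:
    print("rejected")
```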
**Example High Maturity Scenario:**

Team members have access to comprehensive documentation and a library of
code snippets they can use to accelerate development.

Linters are bundled with pre-commit hooks, and no code reaches master
without peer review.

Pre-merge tests are executed before merging code into master.
These run a comprehensive suite covering unit tests, service acceptance
tests, and regression tests.

Once a day, a pipeline of specially configured static code analysis tools
runs against the features merged that day; the results are triaged by a
trained security team and fed back to engineering.

There is a cronjob executing dynamic analysis tools against staging, with a
similar process.

Pentests are conducted against features on every release, and also
periodically against the whole software stack.

## Culture and Organization

Culture and Organization covers organizational topics such as processes,
education, and the design phase.

Once requirements are gathered and analysis is performed, implementation
specifics need to be defined.
The outcome of this stage is usually a diagram outlining data flows and a
general system architecture.
This presents an opportunity for both threat modeling and attaching security
considerations to every ticket and epic that is the outcome of this stage.

### Design

There is some great advice on threat modeling out there,
*e.g.* [this](https://arstechnica.com/information-technology/2017/07/how-i-learned-to-stop-worrying-mostly-and-love-my-threat-model/) article or [this](https://www.microsoft.com/en-us/securityengineering/sdl/threatmodeling) one.

A bite-sized primer by Adam Shostack himself can be found [here](https://adam.shostack.org/blog/2018/03/threat-modeling-panel-at-appsec-cali-2018/).

OWASP includes a short [article](https://wiki.owasp.org/index.php/Category:Threat_Modeling) on Threat Modeling along with a relevant [Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Threat_Modeling_Cheat_Sheet.html).
Moreover, if you're following OWASP SAMM, it has a short section on [Threat Assessment](https://owaspsamm.org/model/design/threat-assessment/).

There are a few projects that can help with creating threat models at this stage: [PyTM](https://github.com/izar/pytm) is one, [ThreatSpec](https://github.com/threatspec/threatspec) is another.

> Note: _A threat model can be as simple as a data flow diagram with attack vectors on every flow and asset, and equivalent remediations. An example can be found below._

![Threat Model](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/threat_model.png "Threat Model")

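In the spirit of the note above (a data flow diagram plus attack vectors and remediations per flow), a threat model can even start life as a plain data structure kept next to the code, where reviews can diff it like anything else. This is an illustrative sketch, not the PyTM or ThreatSpec format; the flows and threats are invented.

```python
# A toy threat model: each data flow lists its attack vectors and the
# agreed remediations, so gaps can be checked mechanically.
threat_model = {
    "flows": [
        {
            "name": "browser -> frontend",
            "threats": ["XSS", "CSRF"],
            "remediations": ["output encoding", "anti-CSRF tokens"],
        },
        {
            "name": "frontend -> database",
            "threats": ["SQL injection"],
            "remediations": ["parameterized queries"],
        },
    ]
}

def unremediated(model: dict) -> list:
    """Return the names of flows listing more threats than remediations."""
    return [
        flow["name"]
        for flow in model["flows"]
        if len(flow["threats"]) > len(flow["remediations"])
    ]

print(unremediated(threat_model))  # [] - every threat has a remediation
```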
Lastly, if the organisation maps Features to Epics, the Security Knowledge
Framework (SKF) can be used to facilitate this process by leveraging its
questionnaire function.

![SKF](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/skf_qs.png "SKF")

This practice has the side effect of training non-security specialists to
think like attackers.

The outcomes of this stage should help lay the foundation of secure design
and considerations.

**Example Low Maturity Scenario:**

Following vague feature requirements, the design includes caching data to a
local unencrypted database with a hardcoded password.

Remote data store access secrets are hardcoded in the configuration files.
All communication between backend systems is plaintext.

The frontend serves data over GraphQL as a thin layer between the caching
system and the end user.

GraphQL queries are dynamically translated to SQL, Elasticsearch, and NoSQL
queries.
Access to data is protected with basic auth set to _1234:1234_ for
development purposes.

**Example High Maturity Scenario:**

Based on a detailed threat model defined and updated through code, the team
decides the following:

* Local encrypted caches need to expire and be auto-purged.
* Communication channels are encrypted and authenticated.
* All secrets are persisted in a shared secrets store.
* The frontend is designed with permissions model integration.
* A permissions matrix is defined.
* Input is escaped and output is encoded appropriately, using
  well-established libraries.

### Education and Guidance

Metrics won't necessarily improve without training engineering teams and
somehow building a security-minded culture.
Security training is a long and complicated discussion.
There is a variety of approaches out there; on the testing-only end of the
spectrum there are fully black-box virtual machines such as
[DVWA](http://www.dvwa.co.uk/), the
[Metasploitable series](https://metasploit.help.rapid7.com/docs/metasploitable-2),
and the [VulnHub](https://www.vulnhub.com/) project.

The code-and-remediation end of the spectrum isn't as well developed, mainly
due to the complexity involved in building and distributing such material.
However, there are some respectable solutions:
[Remediate The Flag](https://www.remediatetheflag.com/) can be used to set
up a code-based challenge.

![Remediate the Flag](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/rtf.png "Remediate the Flag")

However, if questionnaires are the preferred medium, or if the organisation
is looking for self-service testing,
[Secure Coding Dojo](https://github.com/trendmicro/SecureCodingDojo) is an
interesting solution.

More on the self-service side, the Security Knowledge Framework has released
several [Labs](https://owasp-skf.gitbook.io/asvs-write-ups/), each of which
showcases one vulnerability and provides information on how to exploit it.

However, to our knowledge, the most flexible project out there is probably
the [Juice Shop](https://github.com/bkimminich/juice-shop): deployed on
Heroku with one click, it offers both CTF functionality and a self-service
standalone application that comes with solution detection and a
comprehensive progress board.

![Juice Shop](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/juiceshop.png "Juice Shop")

### Process

**Example High Maturity Scenario:**

Business continuity and security teams run incident management drills
periodically to refresh incident playbook knowledge.

## Test and Verification

At any maturity level, linters can be introduced to ensure that consistent
code is being added.
For most linters there are IDE integrations, providing software engineers
with the ability to validate code correctness during development time.
Several linters also include security-specific rules.
This allows for basic security checks before the code is even committed.
For example, if you write in TypeScript, you can use
[tslint](https://github.com/palantir/tslint) (since deprecated in favour of
ESLint) along with
[tslint-config-security](https://www.npmjs.com/package/tslint-config-security)
to easily and quickly perform basic checks.

However, linters cannot detect vulnerabilities in third-party libraries, and
as software supply chain attacks spread, this consideration becomes more
important.
To track third-party library usage and audit their security, you can use
[Dependency Check/Track](https://dependencytrack.org/).

![SKF Code](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/skf_code.png "SKF Code")

This stage can be used to validate software correctness, and its results can
serve as a metric for the security-related decisions of the previous stages.
At this stage, both automated and manual testing can be performed.
SAMM again offers three maturity levels across Architecture Reviews,
Requirements Testing, and Security Testing.
Instructions can be found [here](https://owaspsamm.org/model/verification/)
and a screenshot is shown below.

![SAMM Testing](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/samm_testing.png "SAMM Testing")

Testing can be performed in several ways, and it depends highly on the
nature of the software, the organisation's cadence, and the regulatory
requirements, among other things.

If available, automation is a good idea, as it allows detection of
easy-to-find vulnerabilities without much human interaction.

If the application communicates using a web-based protocol, the
[ZAP](https://github.com/zaproxy/zaproxy) project can be used to automate a
great number of web-related attacks and detections.
ZAP can be orchestrated using its REST API, and it can even automate
multi-stage attacks by leveraging its Zest scripting support.

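A hedged sketch of what driving ZAP over its REST API can look like: the `/JSON/<component>/action/<action>/` URL shape is ZAP's JSON API, while the host, port, API key, and target below are placeholders for your own setup. Only the URL construction is shown; issuing the request (e.g. with `urllib.request.urlopen`) would start the actual scan.

```python
from urllib.parse import urlencode

# Assumed local ZAP daemon; adjust host, port, and API key to your setup.
ZAP = "http://localhost:8080"
API_KEY = "changeme"  # placeholder

def zap_api(component: str, action: str, **params: str) -> str:
    """Build a ZAP JSON API URL, e.g. for the spider or active scanner."""
    query = urlencode({**params, "apikey": API_KEY})
    return f"{ZAP}/JSON/{component}/action/{action}/?{query}"

# Typical orchestration order: spider the target first, then active-scan it.
spider_url = zap_api("spider", "scan", url="https://staging.example.com")
ascan_url = zap_api("ascan", "scan", url="https://staging.example.com")

print(spider_url)
# http://localhost:8080/JSON/spider/action/scan/?url=https%3A%2F%2Fstaging.example.com&apikey=changeme
```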
Vulnerabilities from ZAP and a wide variety of other tools can be imported
and managed using a dedicated defect management platform such as
[Defect Dojo](https://github.com/DefectDojo/django-DefectDojo) (screenshot
below).

![Defect Dojo](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/defectdojo.png "Defect Dojo")

For manual testing, the [Web](https://github.com/OWASP/wstg) and
[Mobile](https://github.com/OWASP/owasp-mstg) Security Testing Guides can be
used to achieve a base level of quality for human-driven testing.

**Example Low Maturity Scenario:**

The business deployed the system to production without testing.
Soon after, the client's routine pentests uncovered deep flaws allowing
access to backend data and services.
The remediation effort was significant.

**Example High Maturity Scenario:**

The application features received dynamic automated testing when each
reached staging, and a trained QA team validated the business requirements
that involved security checks.
A security team performed an adequate pentest and gave a sign-off.

## Build and Deployment

Secure configuration standards can be enforced during deployment using the
[Open Policy Agent](https://www.openpolicyagent.org/).

![SAMM Release](https://github.com/OWASP/www-project-integration-standards/raw/master/writeups/owasp_in_sdlc/images/samm_release.png "SAMM Release")

**Example Low Maturity Scenario:**

_please create a PR_

**Example High Maturity Scenario:**

The CI/CD system, when migrating successful QA environments to production,
applies the appropriate configuration to all components.
Configuration is tested periodically for drift.

Secrets live in memory only and are persisted in a dedicated secrets storage
solution such as HashiCorp Vault.

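What "secrets live in memory only" can look like in practice: read the secret from the process environment (or a Vault client) at startup, so the value is never committed to the repository or written to disk. This is a minimal sketch with invented names; a Vault setup would fetch from the Vault API inside the same function.

```python
import os

class MissingSecretError(RuntimeError):
    """Raised when a required secret was not injected into the process."""

def load_secret(name: str) -> str:
    """Fetch a secret from the process environment.

    Failing loudly at startup is preferable to running with an empty
    credential; the value itself exists only in process memory.
    """
    value = os.environ.get(name)
    if not value:
        raise MissingSecretError(f"secret {name} is not set")
    return value

# Normally injected by the platform or a Vault agent, not set in code.
os.environ["DB_PASSWORD"] = "example-only"
assert load_secret("DB_PASSWORD") == "example-only"
```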
## Information Gathering

Concerning metrics, the community has been quite vocal on what to measure
and how important it is.
The OWASP CISO guide offers 3 broad categories of SDLC metrics[1] which can
be used to measure the effectiveness of security practices.
Moreover, there are a number of presentations on what could be leveraged to
improve a security programme, starting from Marcus Ranum's
[keynote](https://www.youtube.com/watch?v=yW7kSVwucSk) at AppSec
California[1], Caroline Wong's similar
[presentation](https://www.youtube.com/watch?v=dY8IuQ8rUd4), and
[this presentation](https://www.youtube.com/watch?v=-XI2DL2Uulo) by J. Rose
and R. Sulatycki.
These, among several write-ups by private companies, all offer their own
version of what could be measured.

Projects such as the [ELK stack](https://www.elastic.co/elastic-stack),
[Grafana](https://grafana.com/), and
[Prometheus](https://prometheus.io/docs/introduction/overview/) can be used
to aggregate logging and provide observability.

However, no matter the WAFs, logging, and secure configuration enforced at
this stage, incidents will eventually occur.
Incident management is a complicated and high-stress process.
To prepare organisations for this, SAMM includes a section on
[incident management](https://owaspsamm.org/model/operations/incident-management/)
involving simple questions for stakeholders to answer, so you can determine
incident preparedness accurately.

**Example High Maturity Scenario:**

Logging from all components is aggregated in dashboards, and alerts are
raised based on several thresholds and events.
Canary values and events are fired against the monitoring from time to time
to validate that it works.

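A canary check like the one just described can be as small as a scheduled job that emits a known marker event, plus an alert rule that fires when the marker stops appearing in the aggregated logs. A minimal sketch, with the marker name and thresholds invented for illustration:

```python
import json
import time

CANARY_MARKER = "monitoring-canary"  # invented marker name

def emit_canary() -> str:
    """Emit a structured canary log line for the pipeline to pick up."""
    return json.dumps({"event": CANARY_MARKER, "ts": time.time()})

def canary_missing(log_lines: list, max_age: float, now: float) -> bool:
    """Alert condition: no canary event seen within the last max_age seconds."""
    latest = max(
        (json.loads(line)["ts"] for line in log_lines
         if json.loads(line).get("event") == CANARY_MARKER),
        default=None,
    )
    return latest is None or now - latest > max_age

# A fresh canary keeps the alert quiet; a stale one (or none) fires it.
logs = [emit_canary()]
print(canary_missing(logs, max_age=300.0, now=time.time() + 60))   # False
print(canary_missing(logs, max_age=300.0, now=time.time() + 600))  # True
```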
# Credits

The
