website: split (#8616)

* add package

Signed-off-by: Jens Langhammer <jens@goauthentik.io>

* remove most of website

Signed-off-by: Jens Langhammer <jens@goauthentik.io>

* keep relative api browser internal

Signed-off-by: Jens Langhammer <jens@goauthentik.io>

* remove more stuff

Signed-off-by: Jens Langhammer <jens@goauthentik.io>

* switch openapi renderer

Signed-off-by: Jens Langhammer <jens@goauthentik.io>

* keep tests

Signed-off-by: Jens Langhammer <jens@goauthentik.io>

* add placeholder index page to fix build

Signed-off-by: Jens Langhammer <jens@goauthentik.io>

* fix build

Signed-off-by: Jens Langhammer <jens@goauthentik.io>

* re-add blog

Signed-off-by: Jens Langhammer <jens@goauthentik.io>

* fix default url

Signed-off-by: Jens Langhammer <jens@goauthentik.io>

* fix build?

Signed-off-by: Jens Langhammer <jens@goauthentik.io>

* actually fix build

Signed-off-by: Jens Langhammer <jens@goauthentik.io>

---------

Signed-off-by: Jens Langhammer <jens@goauthentik.io>
Commit d7ed1a5d30 (parent d29c3abc7d), authored by Jens L and committed via GitHub on 2024-02-28 00:59:04 +01:00
128 changed files with 1084 additions and 8233 deletions


@@ -48,7 +48,6 @@ jobs:
matrix:
job:
- build
- build-docs-only
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4


@@ -14,9 +14,10 @@ RUN --mount=type=bind,target=/work/website/package.json,src=./website/package.js
COPY ./website /work/website/
COPY ./blueprints /work/blueprints/
COPY ./schema.yml /work/
COPY ./SECURITY.md /work/
RUN npm run build-docs-only
RUN npm run build
# Stage 2: Build webui
FROM --platform=${BUILDPLATFORM} docker.io/node:21 as web-builder
@@ -149,7 +150,7 @@ COPY --from=go-builder /go/authentik /bin/authentik
COPY --from=python-deps /ak-root/venv /ak-root/venv
COPY --from=web-builder /work/web/dist/ /web/dist/
COPY --from=web-builder /work/web/authentik/ /web/authentik/
COPY --from=website-builder /work/website/help/ /website/help/
COPY --from=website-builder /work/website/build/ /website/help/
COPY --from=geoip /usr/share/GeoIP /geoip
USER 1000

package.json (new file, +5 lines)

@@ -0,0 +1,5 @@
{
"name": "@goauthentik/authentik",
"version": "1.0.0",
"private": true
}


@@ -1,31 +0,0 @@
---
title: The next step for authentik
description: TL;DR authentik is a company now, and we're hiring!
slug: 2022-11-02-the-next-step-for-authentik
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- announcement
hide_table_of_contents: false
---
TL;DR authentik is a company now, and we're hiring!
<!--truncate-->
authentik has been primarily a hobby project for me. Ever since I started the project in 2018, it was mainly developed in my spare time. Over time, and as the project gathered more of a following and the community grew, more people started helping with feedback, suggestions, and also by contributing documentation, integration guides, bug fixes and new features.
During that time, there were quite a few requests from people who wanted professional support and consultation. There have also been a lot of cool ideas shared from both the community and myself. However, I didn't have the time to work on them, as I always had a full-time job that authentik couldn't (even with all the very generous GitHub sponsors, thank you all very much!) fully replace.
Which is why I'm very happy to announce the launch of Authentik Security, an open core company built around authentik. I will be leading the company as CTO, and we have incorporated as a [public benefit company](https://opencoreventures.notion.site/OCV-Public-Benefit-Company-OPBC-eccb31976fc6485e9e55ad786c062d35) so the open source project will always be maintained. This move will allow me to work on authentik full time and hire a team of engineers. We've been graciously funded by [Open Core Ventures](https://opencoreventures.com/), who have been a joy to work with.
As part of this change, authentik will be re-licensed to MIT, which should even make it easier to adopt in your environment. Thanks to our most active contributors [@iamernie](https://github.com/iamernie), [@tigattack](https://github.com/tigattack), [@ikogan](https://github.com/ikogan), and [@JosephKav](https://github.com/JosephKav) for supporting the license change.
Now I know this might sound scary to some of you. I can assure you that there will be nothing negative coming out of this for the open source version.
The current version of authentik will stay open source and continue to be developed. We will add business-focused features like auditing and compliance to the source-available enterprise version over time. No existing features will be removed from the open source version with the intention of adding them to the enterprise version. Features might still be deprecated, but they won't show up on the enterprise version. Features from the enterprise version will regularly be open-sourced. In fact, a lot of resources will go into the open source version, so everyone can benefit from them.
Overall, very exciting times ahead. We will hire full-time developers, copywriters, designers, etc., all to make authentik better for everyone. Thanks to all of the authentik community and everyone who uses it. Hopefully you'll enjoy the upcoming changes!

(3 binary image files removed: 220 KiB, 88 KiB, 96 KiB)


@@ -1,156 +0,0 @@
---
title: "SaaS apps conceal being hacked, so self host"
description: More companies are realizing that SaaS isn't, and shouldn't be, the default.
slug: 2023-01-24-saas-should-not-be-the-default
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- blog
- sso
- self-hosting
- saas
- hack
hide_table_of_contents: false
---
“We made a mistake,” so said authentication provider [Okta](https://support.okta.com/help/s/article/Frequently-Asked-Questions-Regarding-January-2022-Compromise?language=en_US) on March 25, 2022, two months after an attack on one of Okta's vendors (Sitel, a contact center) in January. During Okta's initial investigation, the company didn't warn its customers about the attack or about its potential damage.
“At that time,” Okta admitted later, “We didn't recognize that there was a risk to Okta and our customers.”
On March 22, three days before the admission, the group responsible for the attack, LAPSUS$, shared screenshots online that evidenced the success of their attack. As users, customers, and onlookers reacted, Okta co-founder and CEO Todd McKinnon [tweeted about the attack](https://twitter.com/toddmckinnon/status/1506184721922859010?s=20&t=o7e6RA25El2IEd7EMQD3Xg), claiming that the attack was “investigated and contained” but, more controversially, framing the attack as “an attempt.”
<!--truncate-->
Many disagreed with that framing considering, as the news progressed, that the attack had succeeded and had affected 2.5% of Okta customers ([about 375 companies](https://www.bleepingcomputer.com/news/security/okta-confirms-25-percent-customers-impacted-by-hack-in-january/)). Worse, LAPSUS$ itself disagreed, claiming they had “logged in to a superuser portal with the ability to reset the Password and MFA of ~95% of clients.”
Data breaches are not uncommon, but in this case the coverup became worse than the crime. In the days and weeks after, most criticism of Okta didn't focus on the attack itself but on the company's response. Okta had two months to talk about the attack before LAPSUS$ forced them to, and it's unclear whether Okta ever would have talked about it at all without the circulation of those screenshots.
Eventually, Okta admitted its faults. On March 23, David Bradbury, Chief Security Officer at Okta, [wrote that](https://www.okta.com/blog/2022/03/oktas-investigation-of-the-january-2022-compromise/): “I am greatly disappointed by the long period of time that transpired between our notification to Sitel and the issuance of the complete investigation report.”
The Okta case is one example in a line of many. It's a particularly galling case because Okta manages authentication for so many companies, making it a frontline security product, but the pattern itself is not rare.
A major consequence of the rise of SaaS software is a misalignment of incentives between SaaS vendors and customers. We don't have to put on tinfoil hats to realize that vendors have a strong incentive to ignore or even suppress bad news so as to safeguard their relationships with current and future customers.
As honest and as well-intentioned as a vendor might be, that incentive misalignment is still there. This tension exposes the leading edge of an emerging trend and potentially major shift: Companies are reconsidering the value of self-hosting their software so as to have greater control over security and cost.
### 5 incentives SaaS vendors have to be secretive about security
This is not a secret nor a conspiracy theory: SaaS vendors have a compelling array of incentives to hide security flaws in their services and suppress the publicity of successful data breaches.
The very model of delivering software as a service means that vendors are incentivized to maintain relationships with their customers so as to encourage them to maintain their subscriptions. That incentive leads to good things, such as prompt customer service and product iteration. But it can also lead to bad things, such as hiding mistakes and flaws.
It's hard, bordering on impossible, to claim that any given company suppressed news about a data breach. But we can infer it's likely that it happens given three things:
- The SaaS industry is [massive and growing](https://www.grandviewresearch.com/industry-analysis/saas-market-report), meaning there are many companies out there that _could_ suffer a data breach and _could_ suppress news about it.
- The media industry is inherently limited and can't discover and report on every data breach.
- The number of data breaches has [consistently risen](https://www.statista.com/statistics/273550/data-breaches-recorded-in-the-united-states-by-number-of-breaches-and-records-exposed/) from 2005 to 2021.
Given these three dynamics, it's likely some significant portion of vendors have tried, or at least hoped, for news about a data breach to not break headlines. Is it ethical? Likely not. But is it rewarding? If it all works out, yes. Let's look, then, at the five biggest incentives companies have to suppress data breach news.
#### 1. Fines
With the passing of the General Data Protection Regulation (GDPR) in Europe, along with a slew of other regulations, many of which are still emerging, fines have become a significant concern for companies.
GDPR fines are designed, in the [words of the EU](https://gdpr.eu/fines/), to “make non-compliance a costly mistake for both large and small businesses.”
The “less severe infringements” can cost companies up to €10 million (almost $11 million) or up to 2% of the company's annual revenue, “whichever amount is _higher_” [emphasis ours]. The “more serious infringements” can cost companies €20 million (about $21.5 million) or 4% of the company's annual revenue, again “whichever amount is higher.”
#### 2. Reputation
At first glance, the reputation cost of a data breach might seem minimal. Even headline-breaking data breaches don't always seem to impair companies.
You couldn't infer, for example, when the infamous Equifax data breach occurred by looking at its stock price alone.
![alt_text](./image2.png "image_tooltip")
(It happened in September of 2017 and a [class action lawsuit](https://www.ftc.gov/enforcement/refunds/equifax-data-breach-settlement) resulted in payments starting in December of 2022).
The problem with considering the potential of reputation damage is that it's hard to predict. A few factors make news coverage of a data breach more likely, such as whether a company targets average users or business users and whether a company stores obviously sensitive data, but prediction remains difficult.
Your company needn't trend on Twitter to suffer reputation damage, however. According to [Imprivata research](https://security.imprivata.com/rs/413-FZZ-310/images/IM_Report_Third-Party-Remote-Access-Security.pdf), 63% of companies don't do security evaluations on prospective vendors because they rely instead on the reputation of the vendors in question.
The incentive to suppress bad news and avoid a bad reputation also worsens with time. The same research shows that 55% of companies consider a “history of _frequent_ data breach incidents” [emphasis ours] to be a major indicator of risk. That means a company might be transparent about its first breach and gradually become more secretive as it suffers more attacks.
#### 3. Legal issues
Beyond sheer fines, regional, national, and international governments can also levy lawsuits against companies and individuals. Joe Sullivan, for example, a former CTO at Uber, was convicted of [covering up a 2016 data breach](https://www.washingtonpost.com/technology/2022/10/05/uber-obstruction-sullivan-hacking/) in 2022.
Even if individuals aren't jailed and the company itself survives a lawsuit just fine, the consequences can still be meaningful. The previously cited Imprivata research shows that 40% of companies consider legal actions against a vendor to be another risk factor.
#### 4. Professional reputation
Parallel to the previously mentioned risk of more general reputation damage is the risk of damage to a company's professional reputation. Even if a data breach doesn't make headlines, employees, investors, and partners in your industry might still take heed.
The risk here gets worse when you consider the implications of a data breach. Many people, perhaps not entirely fairly, might doubt whether a company runs a good operation if it suffers repeated data breaches. Consider a representative [Glassdoor review of Uber](http://www.glassdoor.com/Reviews/Employee-Review-Uber-RVW39883443.htm):
![alt_text](./image3.png "image_tooltip")
Companies can also start negative feedback loops wherein repeated security issues give them a reputation as having a bad security team, meaning good security engineers might avoid working for the company to avoid association with that reputation.
#### 5. Contract cancellation
Fines aren't the only form of monetary loss. Many companies build security risks into their vendor contracts, making it easy to sever the relationship or recoup their losses after a breach.
The previously cited Imprivata research shows that 59% of companies demand contracts that obligate vendors to “adhere to security and privacy practices.” The same proportion of companies, 59%, don't do security evaluations because they rely on the consequences of the security agreements in the contract.
### What's old is new again: Why self-hosted software is making a comeback
Depending on your age and experience in the industry, the prospect of self-hosted software returning can range from plausible to laughable. The instinct to doubt makes sense: SaaS became the dominant model of software delivery for a variety of valid reasons.
When the SaaS model emerged, it was clear that, in general, SaaS products were easier to use and often more effective than their self-hosted counterparts. SaaS products, for example, are:
- Easy to purchase, often requiring little more than an account and a credit card.
- Easy to run, install, and upgrade.
- Easy to maintain, especially given companies can rely not only on the resources of the SaaS vendor but on the distributed infrastructure of the cloud vendor the SaaS vendor is using.
That said, there are also compelling reasons to use self-hosted products. For example, with self-hosted products, companies can:
- Know where their data is located.
- Customize the application to their unique workflows.
- Shift financing from opex to capex, which often results in net cost savings.
- Trust in shared alignment. If you own and self-host a product, you're incentivized, in a way even the best SaaS vendor isn't, to keep it secure.
Authentication, which is what we specialize in here at Authentik, is a great example. The industry standard used to be self-hosted products, most commonly Microsoft ADFS, but they were notoriously unwieldy, which gave companies like Auth0 and Okta a chance to take over the market.
The technology industry is cyclical, however, and our bet is that by applying lessons learned from SaaS, vendors can offer self-hosted products that are as good or better than SaaS products. Customers can then have their cake and eat it too.
#### Technology is cyclical, not regressive or progressive
At first glance, the idea of companies shifting back to a self-hosted model seems silly: didn't we learn our lessons the first time? It's easy to assume that the technology industry progresses in a linear, upward fashion and infer that anything from the past is necessarily worse.
And while that might be true for specific products (floppy discs aren't coming back, I'm afraid to say), business and technology models can and have returned from the dead.
Marianne Bellotti, author of the book _Kill It with Fire: Manage Aging Computer Systems (and Future Proof Modern Ones)_, raises the example of thick and thin clients. Decades ago, most companies ran applications on bulky mainframes, but before the founding of AWS in 2006, companies had shifted toward running applications on personal computers. But as the cloud grew, the mainframe model returned, with companies “time-sharing,” in Bellotti's words, on public clouds in much the same way they did on mainframes.
“Technology doesn't advance in a straight line,” argues Bellotti, “because a straight line is not actually efficient.” Instead, she writes, “Technology advances not by building on what came before, but by pivoting from it.” And there are significant, growing, multiplying reasons to not only reconsider self-hosting but reconsider private data centers and on-premises infrastructure.
#### Early signs of an unwinding trend
It's hard to believe, given years of discourse and “thought leadership” about the cloud, but there are signs of change. And to be clear, the claim here is not that AWS will suddenly collapse and IT admins will need to rapidly re-learn server racking skills; the claim is that there are reasons to reconsider self-hosting and evidence that more and more companies will do that reconsideration.
Consider the recent decisions of three established companies: 37signals, Dropbox, and Retool.
On Twitter, 37signals and Basecamp co-founder DHH [summarized the results](https://twitter.com/dhh/status/1613508201953038337?s=20&t=QFmwWhR0YSCygvItPwtC8w) of a recent accounting 37signals did across its properties. 37signals spent, in total, $3,201,564.24 on cloud in 2022 and in a [subsequent tweet](https://twitter.com/dhh/status/1613558939760689153?s=20&t=QFmwWhR0YSCygvItPwtC8w), DHH compared that cost to purchasing “insanely powerful iron” from Dell that included “288 vCPU, 15 TB NVM, 1.3TB RAM for $1,287/month over 3 years.”
![alt_text](./image1.png "image_tooltip")
In a [full post](https://dev.37signals.com/our-cloud-spend-in-2022/) on the 37signals blog, senior site reliability engineer Fernando Álvarez provided more details, writing that “In 2023, we hope to dramatically cut that bill by moving a lot of services and dependencies out of the cloud and onto our own hardware.”
Years prior to this planned shift, in 2015, Dropbox decided to “[reverse migrate](https://www.datacenterknowledge.com/manage/dropbox-s-reverse-migration-cloud-own-data-centers-five-years)” from the cloud (AWS, in this case) to privately owned data centers. Before the end of the year, Dropbox relocated 90% of its customer data to an in-house network of data centers. At the time, the shift broke headlines because it seemed so unique.
Five years on, as Scott Fulton writes, Dropbox's decision “is starting to look more like a pioneer expedition.” Dropbox is able to save money and manage their resources more closely. Fulton argues there's no reason this choice “should only fit Dropbox.” Given the ability to effectively plan capacity, Fulton writes that many companies could also “avoid the breaking point of cloud-based service affordability.”
This trend also emerges on a customer-facing level. In 2021, low code platform Retool announced a [self-hosted plan](https://twitter.com/retool/status/1404835350250344449?s=20&t=VMzl65BkICVb3v2HxIXsEw), enabling customers to host Retool inside their infrastructure. Self-hosting, again, is not new, nor is the presence of customers requesting a self-hosted option. The difference here is that Retool, a relatively new company, founded in 2017 and growing fast, found reason to prioritize building a self-hosted option. Retool even [says that](https://twitter.com/retool/status/1404835454948495360?s=20&t=VMzl65BkICVb3v2HxIXsEw) “Self-hosting has been a top request from our self-serve customers.”
Retool cited a couple of [key use cases](https://twitter.com/retool/status/1404835751833989125?s=20&t=VMzl65BkICVb3v2HxIXsEw), including companies working within a regulated industry and companies hosting sensitive data. Retool also made it clear, though, that self-hosting is typically burdensome and offering this plan required the company to modernize the deployment process and make deployment easy by integrating Docker and Kubernetes.
### SaaS should not be the default
David Bradbury, Okta's Chief Security Officer, concludes his [post](https://www.okta.com/blog/2022/03/oktas-investigation-of-the-january-2022-compromise/) explaining the company's investigation of the LAPSUS$ incident and their response to it in a familiar way: “As with all security incidents, there are many opportunities for us to improve our processes and our communications. I'm confident that we are moving in the right direction and this incident will only serve to strengthen our commitment to security.”
You don't have to impugn Okta's commitment or accuse them of suppressing news about this breach to see the problem. SaaS companies, due to the very structure of their business and delivery models, can't be as aligned with your company's needs as you are. SaaS companies will always, at best, be “moving in the right direction,” whereas your company, if it self-hosts its software, won't have to worry about misaligned incentives.
There might be a paradigm shift in how the technology industry hosts its workloads and delivers its software. There might not be. Either way, more companies are realizing that SaaS isn't, and shouldn't be, the default.

(2 binary image files removed: 80 KiB, 134 KiB)


@@ -1,68 +0,0 @@
---
title: "Becoming OpenID certified: Why standards matter"
description: We all know standards matter; without them we wouldn't have the internet, we wouldn't have computers, and we wouldn't even have electricity. But standards are complex. They need to define edge cases, and they need to be explicit but also allow room for implementations to advance and new features to be created. Today we'll dive into the OpenID Connect standard, why it can be challenging to implement, and also what makes it, in some ways, easier than other standards.
slug: 2023-03-07-becoming-openid-certified-why-standards-matter
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- blog
- sso
- self-hosting
- saas
- openid
- oidc
- certification
- testing
hide_table_of_contents: false
---
We all know standards matter; without them we wouldn't have the internet, we wouldn't have computers, and we wouldn't even have electricity. But standards are complex. They need to define edge cases, and they need to be explicit but also allow room for implementations to advance and new features to be created. Today we'll dive into the OpenID Connect standard, why it can be challenging to implement, and also what makes it, in some ways, easier than other standards.
<!--truncate-->
### OpenID Connect
OpenID Connect (from here on "OIDC") is a standard that builds on top of OAuth 2.0, an existing standard for access and authorization. OIDC adds a standard for user identity on top of OAuth, which allows RPs (Relying Parties, more commonly referred to as "Clients") to verify the identity of users through tokens issued by an IDP (identity providers). Before OIDC was as broadly used as it is now, most identity providers relied on a custom method for identity verification, which made it much harder to implement broad support in clients. For example, a client would have to implement custom logic for logging in with Google, GitHub, Microsoft, etc. This also made it much harder for new identity providers (hey that's us!) to work with existing clients, as identity providers would have to basically emulate the specific implementation by one of the aforementioned providers.
Since its introduction in [2014](<https://en.wikipedia.org/wiki/OpenID#OpenID_Connect_(OIDC)>), OIDC has been broadly adopted, particularly in the late 2010s, and now most providers support it. Mind you, there's still a large list of providers that rely on pure OAuth with a custom identity layer on top, like Facebook, Twitter, and Apple, but hopefully they'll migrate to OIDC eventually.
The broad adoption on both sides has also increased the diversity and fragmentation, and indirectly also the divergence from the standard.
### Standards and testing
One aspect where OIDC is very different from other standards is that it has a certification program, where applications can run through a (quite large) set of tests specified by the [foundation](https://openid.net/foundation/) itself, called [Conformance Tests](https://openid.net/certification/). This allows library authors for both clients and identity providers to ensure that their implementation matches the OIDC standard, and behaves as expected. These tests are very thorough (the basic identity provider test suite contains 96 tests), which ensures that edge cases, uncommonly used features, and both positive and negative flows are tested.
These tests are all very well-defined: they explain exactly what is tested, what the expected behavior is, and what (if any) possible issues are.
For example, here you can see the overview of a test that's failed: ![Failed test overview](./failed-overview.png).
And here you see the exact request and the reason why it failed: ![Failed test detail](./failed-detail.png)
The exact standard definition is linked, which makes it very easy to dig in further and figure out what's supposed to happen.
For authentik, we use these tests to ensure that we adhere to the standard and increase compatibility as much as possible. We've got all tests passing on the current development builds (which actually helped us find a couple of bugs), and we'll be certifying the identity provider portion of authentik very soon.
![Successful tests](./summary-green.png)
### Deviation of standards
Unfortunately, it is also quite common for clients to deviate from these standards, often requiring special behavior that is customized to be specific to one or more implementations. One example where we ran into this issue in the past is VMware's vCenter.
VMware introduced "Identity federation" in [Version 7](https://blogs.vmware.com/vsphere/2020/04/vsphere-7-new-generation-vsphere.html), but explicitly only supported Microsoft's ADFS identity provider. However, under the hood vCenter was itself actually using OIDC, which ADFS has supported for a while.
Of course for us, ADFS being the only supported solution wasn't great, and I was very curious to see if ADFS was _actually_ required. With the OIDC implementation that authentik had at the time, we didn't get very far. Logins would fail with a very cryptic error message, and since this was unsupported territory, there wasn't much we could find to help us figure out what was wrong.
I'll spare you the details, but after a lot of digging through logs and figuring out what vCenter actually expects and attempts to do, I found it expects the `access_token` to be an encoded [JWT](https://jwt.io/) (JSON Web Token). Stay tuned for our upcoming blog post about how JWT took the identity world by storm. Expecting an encoded JWT is not part of the [OpenID standard](https://openid.net/specs/openid-connect-core-1_0.html), so it somewhat made sense that they only advertise ADFS compatibility. However, as we were finding out, vCenter was not the only application that had this requirement. Researching further, it seemed like this had become sort of a "quasi-standard", as many identity providers were behaving this way.
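To make that concrete, here is a minimal, illustrative sketch (not vCenter's or authentik's actual code; the helper name is made up) of what a client that insists on an encoded JWT effectively does: split the token on the dots and base64url-decode the middle segment, which fails for an opaque access token:

```python
import base64
import json


def jwt_claims_or_none(access_token: str):
    """Return the decoded claims if the token looks like an encoded JWT (JWS), else None."""
    parts = access_token.split(".")
    if len(parts) != 3:
        return None  # opaque token: perfectly valid OAuth, but not an encoded JWT
    payload = parts[1] + "=" * (-len(parts[1]) % 4)  # restore stripped base64url padding
    try:
        return json.loads(base64.urlsafe_b64decode(payload))
    except ValueError:
        return None  # not base64url, or not JSON
```

A provider that issues opaque access tokens yields `None` here, which is roughly the situation that produced those cryptic login errors.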
In the end we decided to follow suit with authentik (mostly for the sake of compatibility, but also since it can make sense), and vCenter logins via authentik are now [fully supported](https://goauthentik.io/integrations/services/vmware-vcenter/) (at least from our side).
### Standards in authentik
Quickly touching on standards more generally in authentik: we aim to make authentik as standards-compliant as possible while retaining its feature set. For example, for SAML sources/providers, all generated responses are tested against the official SAML XML schema. The same is done for the newly added SCIM integration, where everything is equally validated.
### Certification
Standards are great, but without a central governing body that actually verifies and certifies the standards, that's only half the story. With OIDC and the OpenID Connect Foundation, the standards can be enforced, validated, and built on by a group of independent people. Having such a certification makes it a lot easier for people potentially interested in authentik (and software in general) to see that the application adheres to the standard and will work with other existing pieces of software. This alone is a very big part of why we're working on getting OIDC certified.

(5 binary image files removed: 4.3 KiB, 140 KiB, 64 KiB, 109 KiB, 48 KiB)


@@ -1,182 +0,0 @@
---
title: "authentik on Django: 500% slower to run but 200% faster to build"
description: Why the speed of the tools you use often doesn't matter
slug: 2023-03-16-authentik-on-django-500-slower-to-run-but-200-faster-to-build
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- blog
- sso
- python
- django
- speed
- performance
hide_table_of_contents: false
---
# authentik on Django: 500% slower to run but 200% faster to build
I started [authentik](https://github.com/goauthentik/authentik) in 2018 as an open source hobby project but in 2022, with help from Open Core Ventures, I started [Authentik Security](../2022-11-02-the-next-step-for-authentik/item.md), an open core company built around the authentik project.
Building a new startup is, unsurprisingly, quite different from building and maintaining an open source project. With the arrival of funding and the requirement to build a business that could sustain itself now and scale as the company evolved, I had to confront some of the technical choices I made when building authentik, in particular the choice to build authentik using Python and Django.
The primary reason behind choosing these languages was simple: I knew them well and could write code fast. In retrospect, we know now there was a tradeoff. I was able to code faster, but the language itself would eventually impose speed limitations. Python isn't the slowest language out there, but when compared to Node.js and compiled languages like Go, its speed can seem like a big problem. And Django on top of Python makes it even slower.
And yet, I stand by the decision, and as the company has evolved, I think it was a good one. In this post, I'll explain why this decision was a net positive, the benefits and costs of choosing these languages, and the lessons we learned along the way.
<!--truncate-->
## Why we chose Python and Django
We chose Python and Django from a purely pragmatic perspective. Python and Django enabled us to build. Performance is important; architecture is important; sustainability and scalability are important. But in a very real sense, you don't get the privilege of facing those challenges until you build.
The rest of your worries, as important as they may be, can't even be worried about until the business is built. As the business has evolved, we've found too that using these languages supports our primary differentiating feature and makes it easier to hire developers who can help us build further.
### Easier leads to better
As I wrote above, I came to this project knowing the most about Python and Django and feeling the most comfortable using those languages to build a robust product.
And while there is a known sacrifice in performance using languages that tend to be slower, the kind of speed I prioritized was the speed of iteration. By using a language I knew better, I was not only able to build the first versions of authentik faster; I was able to debut them, get feedback, and iterate faster too.
And it wasn't just about comfort. Python was also a good choice because it has a wealth of libraries and components that make the language even easier to build with. While other languages would have required either coding by hand or gluing libraries together, much of our work was relatively out of the box.
### Differentiating features need the most support
The authentication market is not new. The original version of Microsoft ADFS came out back in 2003 with Windows Server. Okta was founded in 2009 and went public in 2017. Auth0 was founded in 2013 and was acquired by Okta in 2021.
In a crowded market, it's more important to be different than better. Okta employs more than 5,000 people and it's unrealistic to think we can outrun them. Even if we were somehow able to teleport to feature parity tomorrow, Okta would benefit from a “You can't be fired for buying IBM” dynamic. Why go with the upstart instead of the established product?
The answer is a differentiating feature, one that goes beyond being a gimmick and becomes compelling enough that at least a select group of people will choose you.
For authentik, our primary differentiator is our delivery model: Unlike other players in authentication, authentik is focused on being self-hosted. There are numerous benefits to this (we also think the trend is [shifting toward self-hosted delivery](../2023-01-24-saas-should-not-be-the-default/item.md) overall, especially for security products), but the relevant benefit here is customization.
authentik appeals the most to developers who want to self-host their authentication and customize it to suit their specific needs. Building authentik with Python and Django supports this because so many developers know Python. If we used a different language, even if it were better in terms of performance, it would likely be less accessible to as many developers.
As it is, when I tell prospects we've built authentik with Python and Django, they're outright excited. They know from the get-go that they can customize authentik, which makes that differentiating feature all the more compelling.
### Hiring is an accelerant
From the outside in, the hardest part about building a startup is building the product. From the inside out, the hardest part is hiring. Shahed Khan, co-founder of Loom, [put it simply](https://twitter.com/_shahedk/status/1416041429432819716): “Hiring is the hardest part about startups.”
Luckily for us, my comfort with Python and Django isn't unique. Python, according to the [TIOBE Index](https://www.tiobe.com/tiobe-index/), is the most popular programming language in the world as of this writing and has been one of the most popular programming languages for years and years.
![./image1.png](./image1.png)
With the number of Python developers out there, we're much more likely to find a developer with specialized experience or interests than if we had chosen a less popular language. And while there are other great languages out there (though if it's not C, Java, or C++, it's not even half as popular as Python), the fact that Python has been popular for years means our pool of potential hires includes Python developers with all levels of experience.
## Speed is an overrated benefit with diminishing returns
Speed is important; that much goes without saying.
But the benefits of speed are relative. Larger companies save more money and resources with better speed than smaller companies. Products that need to be in front of users all day require better performance than products that operate in the background or infrequently.
This isn't to say speed isn't worth investing in, but past a certain threshold, fiddling with performance won't be worth the cost of investment.
### Speed is gravy
DHH, the creator of Ruby on Rails and co-founder of Basecamp & HEY, wrote a post back in 2016 titled “[Ruby has been fast enough for 13 years](https://m.signalvnoise.com/ruby-has-been-fast-enough-for-13-years/).” According to DHH, speed is “gravy for most people.”
The tricky part is that speed is not only gravy but gravy for everyone, universally. If you add, say, a new level of compliance to your product so HIPAA-compliant medical companies can use your tool, that's primarily good for those customers. No one else cares about HIPAA compliance. Speed is different. If you ask, everyone from the end-users to the executives in any company or industry will agree that better performance is, well, better.
It's easy, then, to think prioritizing speed will provide significant benefits, but as DHH writes, a supposed lack of performance didn't stop the growth of Ruby on Rails and it didn't stop the growth of Python either. Revealed preference shows that while everyone probably cares about performance to some degree, almost everyone cares about your core and differentiating features more.
### Speed is even less impactful depending on customer base and product usage
Speed is valuable, yes, but its value is relative to numerous factors, including customer base, company size, and product usage. With those factors accounted for, speed might not even be a priority for you.
Consumers, for example, are fickle. When people talk about performance, they often bring up studies that [companies like Amazon did](https://www.contentkingapp.com/academy/page-speed-resources/faq/amazon-page-speed-study/), such as one showing that every 100 milliseconds in added page load time cost Amazon 1% in sales. I don't doubt those results.
At the scale of Amazon, even small performance boosts can create huge gains. When you're just starting out though, the same gain will be proportionately less effective if your customer base and the resources you dedicate to them are small.
And that's assuming your users even care about speed in the same way Amazon users do. But if you're working in B2B and your users are developers and administrators rather than consumers, you're not going to get those kinds of results. Unless speed is your differentiating feature, B2B users are using your product for another compelling reason, and that reason is likely enough to keep them waiting (barring extreme performance issues or actual downtime).
More than likely, B2B users are also using your software on a reasonably powerful desktop machine connected to office or home Wi-Fi. Amazon wants consumers to be just as satisfied searching on the train as they are searching at home. In most B2B contexts, that simply isn't a use case (or is at least rare).
A [Nielsen Norman Group study](https://www.nngroup.com/articles/response-times-3-important-limits/) provides a clarifying framework:
- If an application responds in 0.1 seconds, the user tends to feel the system is “instantaneous.”
- If an application responds in 1.0 seconds, the user will notice the delay but the user's flow of thought will likely stay “uninterrupted.”
- If an application takes 10 seconds to respond, the user won't be able to stay focused on the task at hand, and if the delay takes any longer, the user will likely look for other tasks to do while waiting.
This framework leads to these questions: Do your users need to feel your application is instantaneous? When they're using your application, are they typically in a state of focus they don't want to break? No one will object to an experience that feels instantaneous, but in terms of setting up initial priorities for a new startup, most B2B software likely doesn't apply to a use case where perfect performance is impactful.
There are a few counterexamples, such as Slack, but most B2B software isn't running in the background constantly, ready to be picked up on a whim. authentik provides a really important service, authentication, but users only have to call on it occasionally. Once you're in, you're in.
### Speed is often blamed for other problems
Especially given Python's notorious performance limits, the language can be an easy target when you do encounter speed issues. But Python is more than likely not the source (or at least not the primary source) of performance problems.
After effective optimization, you can make applications built on Python much faster than you might think. In numerous cases, we've added new features to authentik and had customers complain about resulting performance issues, leading us to optimize and end up with something even faster than before.
Sometimes, speed really is a problem but your language might not be the bottleneck. The language might take twenty milliseconds instead of five, for example, but if your database query still takes 300 milliseconds, then the language isn't the limiting factor. By the time you optimize everything down to the language, your users might be more than happy with the performance.
In a post on A List Apart, W3C [demonstrates this idea well](https://alistapart.com/column/performance-matters/), showing that what you might assume about performance, given the reputation of a programming language, doesn't always bear out. They showed users five travel sites with similar designs and functionality. They then asked participants to predict, based on the graph below, which site was slowest.
![./image4.png](./image4.png)
“Many developers would assume,” they write, “that the fastest site would be the one with the least number of formatted lines of JavaScript, like Site #4, or the one with the least bytes downloaded, like Site #3.” In reality, they go on to show, “Site #5 is actually the fastest, even though it has more JavaScript and bytes downloaded.”
The primary point they wanted to make was that “It's not just about how to most efficiently execute JavaScript, it's about how all of the browser subsystems can most effectively work together.” And the point we can extrapolate from this, beyond web apps and beyond JavaScript, is that our assumptions about languages don't always hold once an application is actually built. This isn't to say the limits imposed by JavaScript or Python aren't real, but that the limits more likely come from elsewhere in the system.
“Elsewhere” can also include, maybe counterintuitively, user experience design. Performance, though we can measure it down to the millisecond, isn't as objective as the metrics imply.
One [study](https://blog.codinghorror.com/actual-performance-perceived-performance/) showed, for example, that the design of progress bars changed how people experienced speed. In the study, all progress bars took the same amount of time but users experienced smooth progress bars and progress bars that sped up toward the end as being faster.
![./image2.png](./image2.png)
The user's experience of time is ultimately more important than the raw reality.
## Lessons learned
We've learned a few lessons worth sharing, and even though we might have done things differently if we had known all of this ahead of time, it was better to build and learn rather than delay and hypothesize.
As [Jeff Atwood has written](https://blog.codinghorror.com/version-1-sucks-but-ship-it-anyway/): “Version 1 Sucks, But Ship It Anyway.” That doesn't mean it will or should suck forever but, as he writes, whatever you create will inevitably be a “pale shadow of the shining, glorious monument to software engineering that you envisioned when you started.”
Instead, ship. You'll make mistakes and you'll make tradeoffs you know have consequences, but it's only after shipping and iterating that you'll know which mistakes really matter.
### Choose your drawback
Listen, I get it: I built authentik as an open source project first, and if you're coming from that world especially, it's tempting to want to prioritize technical quality above all else. If you're building a hobby project, that's fine, but as soon as you start building a business, you have to learn to make tradeoffs.
Once you embrace the need to make tradeoffs, you can decide which drawback you'd most like to take. There will be consequences for each of your decisions, so you need to choose not only the best available option but the option with either the fewest drawbacks or the kinds of drawbacks you can compensate for.
With that in mind, we learned that Python was a good choice because even though its performance issues are real, we are equipped to deal with them. We've been able to optimize around it, and we think we'll be able to stay ahead of any real performance issues in the future too.
### Your tech stack is not static
Reputations tend to outlast realities and that pattern could happen to Python. Despite its recognized issues, Python is getting faster over time and speed is becoming less of an issue.
According to the Microsoft team [contributing to Python](https://devblogs.microsoft.com/python/python-311-faster-cpython-team/), Python 3.11, released in October 2022, saw “speedups of 10-60% in some areas of the language.” The more progress Python makes, the less often it will be a constraint and the less severe the limit will be when it is a constraint.
### Migration is always an option
I know migration isn't fun, but when you're making tradeoffs, it's worth keeping it in mind as a future option. If the choice is to build now and migrate later instead of never building at all, I'm going to choose migration every time.
If we start running into truly significant performance issues (and I emphasize _if_), we can always migrate critical parts of the application to a different language. This will of course be fully transparent to anyone running authentik, and I'd like to think of it as a last resort, for after we've already done all the optimization possible.
### Architect your application well
I've emphasized building throughout this post, but that doesn't mean you should toss aside all concerns for scalability, speed, and the long-term performance of your application in general.
Amazing speed and fine-tuned optimization are negligible if your application can't scale. Even the fastest application will have its limit somewhere, but if you architect your application well, you can spread the load across many instances. This scalability is one of the core principles we built authentik on.
As Nelson Elhage, founding member of the Sorbet project at Stripe, [wrote](https://blog.nelhage.com/post/reflections-on-performance/), “If you want to build truly performant software, you need to at least keep performance in mind as you make early design and architectural decisions, lest you paint yourself into awkward corners later on.”
## Don't get nerd sniped
The hard overall lesson here is that, especially if you're a technical founder, you might have to resist the nerd within you.
The nerd within you is a huge asset when youre learning and building and perfecting but your nerd can be an obstacle if it encourages you to fixate on the wrong priority.
Nothing better evokes the danger of nerdiness than the physicist in [XKCD #356](https://xkcd.com/356/). In it, a bamboozled physicist ends up getting run over by a truck because they're consumed by an interesting physics puzzle.
![./image3.png](./image3.png)
Speed is a complex challenge and an often fun, interesting one to figure out. But if you're building a business, then you need to embrace tradeoffs instead of pursuing perfection. If you don't, you might get nerd sniped by your own interests and your business can suffer.
---
Let us know your thoughts about the balance of speed in build and run scenarios, and any other topic about authentik. We look forward to your comments and input.


@@ -1,75 +0,0 @@
---
title: What's new with authentik - March 2023
slug: 2023-03-23-whats-new-with-authentik-march-2023
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
- name: Tana Berry
title: Sr. Technical Content Editor at Authentik Security Inc
url: https://github.com/tanberry
image_url: https://github.com/tanberry.png
tags:
- announcement
hide_table_of_contents: false
---
In a blog post from November 2022, titled “[Next steps for authentik](https://goauthentik.io/blog/2022-11-02-the-next-step-for-authentik)”, I wrote about the launch of [Authentik Security](https://goauthentik.io/), our open core company built around the open source project [authentik](https://github.com/goauthentik/authentik).
In this post, we'd like to provide updates on our progress in building out Authentik Security the company, ramping up the feature set in our open source identity provider, and taking the first steps in developing and offering an enterprise-level feature set for the Cloud or self-hosting. We are enthusiastic about our path forward and our plans to take authentik from a project to a product.
<!--truncate-->
## Company Updates
Authentik Security is now officially almost 6 months old. The energy and vitality of our community-based authentik project continues, and now with a growing staff we can keep the open source product expanding with new releases and also work on launching and growing the company. Mixed into that balance is defining our feature sets for each of our offerings, and developing new functionality for companies that want to deploy authentik at the enterprise level, either self-hosted or on the Cloud as a SaaS offering by Authentik Security.
We have a lot of exciting work in front of us. Check out our [job posts](https://goauthentik.io/jobs/) and consider joining us!
## New features for authentik
Authentik Security aims to always be open and transparent, and our trust in our community's awesomeness means that we realize you all are experts in the field, the ones working in the security and identity management industry every day (and night!), so we look forward to strengthening our collaboration and communication with you, our users.
We have a roadmap with several new features, and we want to hear your opinions on them. You can write us directly at hello@goauthentik.io or open a [GitHub issue](https://github.com/goauthentik/authentik/issues). For more targeted conversations, we will reach out to schedule calls with users who want to provide more in-depth collaboration and feedback.
### Coming up
Roadmapped features include:
- **RBAC**
- Currently there's only the option for users to be superusers or regular users, and superusers can edit everything, including all authentik objects. This goes against the security principle of [least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege), and as such goes against our security-focused mantra. Role-based access control (RBAC) restricts CRUD rights on authentik objects based on a specific _role_, providing even more fine-grained control.
- **UX improvements**
- Ease of use and clear, intuitive UIs are always among our main goals, and we're now focusing yet more on making the experience of using authentik even better. Less jumping around in the UI and more helpful context actions, suggestions, and recommendations.
- **Push-notification multifactor authentication** (Enterprise)
- authentik recognizes the importance of MFA (Multifactor Authentication); in some cases, the old username/password combination alone is less than sufficient.
- **Desktop authentication** (Enterprise)
- Robust cross-platform desktop authentication to secure access to all machines running in the environment.
- **AI-based risk assessment** (Enterprise)
- AI provides the ability to analyze massive amounts of data that are relevant to security and access, accelerating risk assessment.
### We're listening
As we grow the company and the feature sets, we are focused on building and maintaining a strong process for consistent communication and collaboration between our product team and our users. We want to hear which features you would like to see prioritized by our product and development teams. We look forward to conducting user interviews, taking requests for new features via GitHub Issues, and even running surveys fueled by swag giveaways!
## Announcing the new Plans page
We are excited to announce that we will publish a new page on our product website, where we explain our product plans and the pricing for each offering. It's important to us that we get this right, so we will also implement a communication and input process to gather feedback on our pricing and all offering details.
One of the primary ways for us to hear your input will be right there on the new page. We have created a “waitlist” for each pricing plan, in order to gauge interest in each plan's feature set and to learn what feedback you have on the pricing in general. Please join the list and the conversation!
### About the plans
The following offerings are described in detail on the new page (coming soon!) on our website.
- Open Source:
Our forever-free offering, the open source authentik project, has been active for over 5 years, and now has the support of Authentik Security. For self-hosted environments, it works with all major authentication protocols (OAuth2/OpenID Connect, SAML, LDAP, and proxy authentication), with an advanced, customizable policy engine, and community support.
- Enterprise Self-hosted:
Our Enterprise Self-hosted plan offers all of the features of open source authentik (and is still source-available), plus releases with long-term-support (LTS), an enterprise-level support plan, and additional features for larger organizations such as AI-based risk assessment and multifactor authentication (MFA) with push notification.
- Enterprise Cloud:
The Enterprise Cloud plan provides the convenience of our enterprise-level product as a SaaS offering, hosted and managed by Authentik Security. For many organizations, the benefits of decreased operational costs and universal data access (no VPN, servers, and network configuration required) make SaaS the best choice. With the cloud offering, the same enterprise-level support plan is included, and migrating to self-hosted is always an option.
Take a look at the new [Plans page](https://goauthentik.io/pricing/), and if you're interested in the upcoming feature sets and learning more about our Cloud or self-hosted offerings, join the wait list and let's start talking about what your company needs.
Thanks for reading, and being part of authentik.

(binary image file removed, 23 KiB)


@@ -1,97 +0,0 @@
---
title: "JWT: A token that changed how we see identity"
slug: 2023-03-30-JWT-a-token-that-changed-how-we-see-identity
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
- name: Tana Berry
title: Sr. Technical Content Editor at Authentik Security Inc
url: https://github.com/tanberry
image_url: https://github.com/tanberry.png
tags:
- JWT
- token
- identity provider
- history
- JWS
- JWKS
- JWE
- JSON
- SSO
hide_table_of_contents: false
---
Even though JWTs (JSON Web Tokens, pronounced “jots”) have been around since [2010](https://en.wikipedia.org/wiki/JSON_Web_Token), its worth examining their more recent rise to become the dominant standard for managing authentication requests for application access.
When JWTs were first introduced, it was immediately clear that they were already an improvement on using a single string to represent the user information needed for authentication. The single string credential method was simple, but not as secure. There was no way to provide additional data or internal checks about the validity of the string or its issuer. With JWTs, there are expanded capabilities with more parts; there is a **header**, JSON-encoded **payloads** (called “claims”, which hold data about the user and about the token itself, such as an expiration date), and a **signature** (either a private key or a private/public key combination).
Let’s look a bit more closely at what a JWT is, review a short history of JWT evolution and adoption, and then discuss how JWTs are used in authentik.
<!--truncate-->
## What is a JWT?
As briefly described above, a JWT is a security token that is used to securely and efficiently pass signed payloads containing the identity of authenticated users between servers and applications or other services. JWTs are structured-format tokens, made up of encoded JSON content.
There are typically three parts to a JWT, and in an encoded JWT each part is separated by a period (.). In the table below, the entire encoded JWT is shown in the left column, with three different colors to highlight the three different parts, which are shown decoded in the right column.
![](./table.png)
Let’s take a closer look at each of the three parts, as shown above:
- The **Header** is a familiar concept in authentication, as an HTTP Authorization request header can be used in APIs to hold user credentials. For a JWT, the two most common declarations in the header are `alg` (which defines the hashing algorithm used to sign the JWT) and `typ` (which declares the token as a JWT).
- The **Payload** section of the JWT contains the _claims_, one of the most important and powerful parts of JWTs. A claim is data, about the user and/or the token, in encoded JSON. There are seven official registered claims, plus many public claims that anyone can use. In addition, you can create private claims for use within your own environment. You can see a list of all registered and public `claims` on the official [website](https://www.iana.org/assignments/jwt/jwt.xhtml) maintained by IANA (Internet Assigned Numbers Authority).
    In the example above, the following claims are included in the JWT: `sub` (the subject of the JWT), `name` (the name of the person who requested/created it), and `iat` (the time at which the JWT was issued; this claim is used to determine the age of the JWT).
    **NOTE**: of course, in most cases you would not include sensitive information within the claims, because an encoded JWT can be easily decoded! However, there are certain use cases for including PII (such as a user name) within a JWT, for use by internal processes within the application. For example, if application A uses authentik to log in, the JWT should in most cases not be visible to the end user and should be treated as a password/credential/secret. If the JWT will be visible to the end user for some rare reason, then it should not include any PII.
- In the **Verified Signature** part, we see more information about how the JWT was signed, as well as the signature that was created by the server using a secret key. The electronic signature is simply a unique mathematical computation, and this signature is for the specific payload of the JWT. If any data in the JWT is modified, then the JWT must be re-signed. In the case of malfeasance, the attacker would have to know the original secret or private key in order to re-sign. If they didn’t know the secret data, then the modified data is rendered useless because the server cannot validate the signature.
In the example above, the token is signed using a single secret key. This is known as _symmetric signing_, with the HMACSHA256 (HMAC + SHA256 checksum) algorithm. JWTs can also be signed with a public/private key pair (i.e., using an RSA or ECDSA algorithm); this is known as _asymmetric signing_ because a single secret cannot perform both operations (sign/validate or encrypt/decrypt).
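To make the structure and symmetric signing described above more concrete, here is a minimal sketch using the third-party PyJWT library. This is an illustration only, not authentik’s internal code; the claim values and the secret are made up:

```python
import base64
import json
import time

import jwt  # provided by the PyJWT package

secret = "change-me"  # illustrative symmetric signing key, not a real credential

# Encode: the header and payload are base64url-encoded JSON, then signed with HS256.
token = jwt.encode(
    {"sub": "1234567890", "name": "John Doe", "iat": int(time.time())},
    secret,
    algorithm="HS256",
)

# An encoded JWT is three dot-separated parts: header.payload.signature.
header_b64, payload_b64, signature_b64 = token.split(".")

# The payload is only encoded, not encrypted: anyone holding the token can read it,
# which is why sensitive data should normally stay out of the claims.
payload = json.loads(base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4)))
print(payload)

# Validation succeeds only if the signature matches, i.e. it was produced with the secret.
claims = jwt.decode(token, secret, algorithms=["HS256"])
```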
There are two types of JWTs:
- A **JWS** is an electronically signed JWT, but the content of the payloads is not encrypted. We use JWS in authentik, primarily because we (along with most in the industry) don’t see the overhead “cost” of encryption to be worth the benefits for users. Using a JWS also allows developers to view data within the JWT (without having to do decryption). In authentik, JWTs are symmetrically signed by default, but you can choose to use asymmetrically signed JWTs.
- A **JWE** has encrypted payloads, so you cannot use a decoder tool such as [jwt.io](https://jwt.io/) to view the contents.
There are many websites and videos to help you learn more about the structure of JWTs and their benefits. Next, let’s look at how they have evolved, and their rise in use.
## Evolution and industry adoption
JWTs were first [drafted as a concept in September 2010](https://jsonenc.info/jss/1.0/), and then [updated in 2015](https://www.rfc-editor.org/rfc/rfc7519). Along the way there have been some tweaks made, most notably strengthening how libraries handle the (technically valid) `"alg": "none"` setting in the header.
Since 2015, JWTs have become one of the most common authentication methods. Given the stateless quality of JWTs, their improved security over single strings, and the fact that authorization and authentication data in a JWT token can be efficiently shared between multiple clients, servers, services, and microservices, their rapid adoption makes sense.
Consider how authentication and authorization are handled in a microservices setting. When a request is made on the client, the client first communicates with the authorization server and retrieves a JWT. This JWT contains user details and serves as the access token that is sent to microservices to gain access. All services within the environment can then validate and decode the token in order to determine the user who is requesting access.
![alt_text](./image1.png "image_tooltip")
This architectural workflow has proven to be effective in modern web applications that are often highly distributed. Indeed, JWTs are now the standard in most identity providers and cloud platforms, as well as in many other enterprise systems and modern database platforms; adopters include Netflix, CockroachDB, MongoDB, VMware, and the list goes on.
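To sketch that flow in code, here is an illustration with made-up claim names, using the PyJWT and cryptography packages rather than any specific vendor’s SDK; in a real deployment, each service would fetch the public key from the auth server’s JWKS endpoint instead of sharing a key object in one process:

```python
import jwt  # PyJWT, installed with its "crypto" extra for RSA support
from cryptography.hazmat.primitives.asymmetric import rsa

# The authorization server holds the private key and issues the signed access token.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
access_token = jwt.encode(
    {"sub": "user-123", "scope": "orders:read"},
    private_key,
    algorithm="RS256",
)

# Each microservice only needs the matching public key (typically fetched once from
# the auth server's JWKS endpoint) to validate the token and identify the caller.
public_key = private_key.public_key()
claims = jwt.decode(access_token, public_key, algorithms=["RS256"])
print(claims["sub"], claims["scope"])
```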
## Why use JWTs in authentik
Here at authentik we agree with the industry’s use of JWTs as the best method for managing user access (authentication _and_ authorization, but that’s a topic for a whole other blog!). In addition to building authentik to use industry-standard best practices, we see only advantages to implementing JWTs.
JWTs are highly effective (and efficient) for enterprises; they are an improvement on (and more secure than) the old system of “pre-authentication”, with all application servers running behind a proxy on a private network, which relied only on the HTTP header to identify the user. With JWTs, the user info contained inside must first be authenticated, then access to all applications and services is authorized.
As discussed above and shown in the diagram, after the user info in the JWT is validated, it can be used to auto-approve access to all services within the environment, without the user needing to constantly supply credentials.
In authentik, this cross-services efficiency can be seen in a concrete example:
- Application **A** (running in Kubernetes) wants to access application **B** (secured behind authentik). App **A** takes the JWT that it gets from Kubernetes (which acts as an auth server), sends that JWT to authentik, and authentik verifies it against the signing key from Kubernetes. Then, based on which namespace or other criteria App **A** is running in, authentik can give or deny access to App **B** or any other applications that are using authentik - all without any passwords being entered.
In the above example with authentik, you can view the authentik user interface to get insight and metrics into where these automated logins happen. To learn more about authentik’s event logging for login activity, read our [documentation](/docs/events/#login) and take a look at the dashboards and metrics in the authentik user interface.
Another important factor about JWTs that we have not yet mentioned is the ability to define an expiration time for JWTs. Because JWTs cannot be revoked, it’s important to follow best practices and proactively set as short an expiration time as possible. In authentik, by default we set the expiration for access tokens at 5 minutes and for refresh tokens at 30 days (while the refresh token is not technically a JWT, it can be used to get new access tokens, which are JWTs).
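At the token level, that best practice comes down to the registered `exp` claim. Here is a small, hedged sketch with PyJWT (again illustrative, not authentik’s own implementation); the 300-second lifetime mirrors the 5-minute default mentioned above:

```python
import time

import jwt  # PyJWT

secret = "change-me"

# Issue a short-lived access token: exp is 300 seconds after the issue time.
access_token = jwt.encode(
    {"sub": "user-123", "iat": int(time.time()), "exp": int(time.time()) + 300},
    secret,
    algorithm="HS256",
)

try:
    jwt.decode(access_token, secret, algorithms=["HS256"])
except jwt.ExpiredSignatureError:
    # Once `exp` has passed, validation fails outright; the client must use its
    # (longer-lived) refresh token to obtain a new access token.
    ...
```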
## What about you?
As always, we would be interested in hearing your thoughts on JWTs, in general and specifically within authentik. Were you already very familiar with JWTs, and their common adoption, and the advantages of the “cross-services” automated access provided in complex, distributed, microservices environments?
Let us know how you, our authentik users, are implementing JWTs, and whether you are doing any type of customization based on OpenID Connect. For example, the [machine-to-machine authentication feature](/docs/providers/oauth2/client_credentials) in authentik enables you to get a pre-configured JWT and use it in authentik to then get another JWT that is specifically for use within authentik.
Leave us a comment below, reach out to us at hello@goauthentik.io, and visit our [GitHub repository](https://github.com/goauthentik/authentik).

Binary file not shown. (deleted image, 51 KiB)

View File

@ -1,94 +0,0 @@
---
title: "Supply chain attacks: what we can all do better"
slug: 2023-04-07-supply-chain-attacks-what-we-can-all-do-better
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- supply chain attack
- token
- identity provider
- history
- security
- cyberattack
- authentication
hide_table_of_contents: false
---
Supply chains, whether for automotive parts or microprocessors, are complex, as we all know from recent history. Modern software, with more components than ever and automated package management, is also complex, and this complexity provides a rich environment for supply chain attacks. Supply chain attacks inject malicious code into an application via the building blocks of the application (for example, dependencies) in order to compromise the app and infect multiple users.
<!--truncate-->
Using the inherent connections and dependencies of our typical complex workflows for upgrades, deployments, build systems, and other software maintenance, supply chain attackers take advantage of distributed networks and the myriad third-party hardware, software, and services to insert malware into core systems, and take control from there.
For example, NMS (Network Management Systems) are prime targets for attackers, who can use credentials for system _monitoring_ to laterally move into positions that allow _control_ of target systems. This tunneling into the complex system is a hallmark of supply chain attacks. With so many parts and pieces, plus frequently lax gatekeeping of access between components and layers, determined attackers can easily find an unwatched access point to enter, then steadily progress through the system to gain more and more control.
The tight integration and dependency of so many vendors means that dangers are sometimes overlooked; we have become too passive due to the ease of current automation. We run updates and build processes with a click of a few buttons, often without thinking about what exact set of libraries, tools, and apps are being used. What makes supply chain attacks even more difficult to foresee is the fact that the suite of components often comes with signed certificates; the vendors don’t even know yet that their own software has been compromised.
Supply chain attacks are on the rise, and in 2022 supply chain attacks surpassed the number of malware-based attacks [by 40%](https://www.helpnetsecurity.com/2023/01/26/data-compromises-2022/#:~:text=The%20number%20of%20data%20breaches%20resulting%20from%20supply%20chain%20attacks,%2Dbased%20attacks%20by%2040%25).
The 2020 attack on SolarWinds was one of the first major supply chain attacks. In this case, the attackers took advantage of the very standard upgrade process used by SolarWinds and most major IT companies; SolarWinds unknowingly distributed software updates that included the hacked code to its customers, potentially as many as 33,000. The hacked code, deployed on each customer’s site, was then used to install even more malware.
> The 2021 supply chain attacks on [Malwarebytes](https://www.packetlabs.net/posts/malwarebytes-breach/) and [Mimecast](https://www.mimecast.com/blog/important-update-from-mimecast/) demonstrate that any company can be targeted, even security companies. [Immuniweb](https://www.immuniweb.com/blog/state-cybersecurity-dark-web-exposure.html) reported in 2020 that 97% of the world's leading cybersecurity companies had data leaks or other security incidents exposed on the dark web.
In response to the growing threat, a Presidential Executive Order [EO 14028](<https://www.gsa.gov/technology/technology-products-services/it-security/executive-order-14028-improving-the-nations-cybersecurity#:~:text=Executive%20Order%20(EO)%2014028%20%2D,and%20software%20supply%20chain%20integrity.>) was issued in early 2021, with the explicit purpose “to enhance cybersecurity and software supply chain integrity.” This order applies to both government and private companies, specifically any companies that provide software to the government. The order provides a framework of best practices to help strengthen protection against cyberattacks, and [new regulations](https://www.insidegovernmentcontracts.com/2023/02/january-2023-developments-under-president-bidens-cybersecurity-executive-order/) continue to be issued under this Order.
## Open source software is vulnerable too
> _90% of all applications contain open-source code_
Sonatype's [2020 State of the Software Supply Chain Report](https://www.sonatype.com/2020ssc) makes it clear that vulnerabilities in open source projects also impact major enterprise-level companies.
Consider the vulnerability in the Apache Log4j library used for logging user activity, where a remote code execution (RCE) vulnerability in the component Log4Shell is used by malicious hackers to access and control one or more remote servers. This incident highlights how bad actors can take advantage of our dependency on software as common as third-party libraries.
## The role of Identity Providers
Those of us in the business of SSO, IAM, and any type of identity provider software are uniquely positioned to help harden the industry against supply chain attacks, and mitigate the risks.
Authentication tools provide an obvious first line of defense against attacks, with the initial request for username and password. However, using SSO software also provides additional layers of authentication deep within your software ecosystem, with gatekeeping and secure handshakes using internal-only tokens between the many components. Read more about how authentik uses “[machine-to-machine](https://goauthentik.io/docs/providers/oauth2/client_credentials)” authentication, with internally generated [JWT tokens](https://goauthentik.io/blog/2023-03-30-JWT-a-token-that-changed-how-we-see-identity).
- **Session duration**: SSO software, by default, typically has short access sessions (minutes or hours, not days) because shorter sessions force frequent re-authentication. With authentik, access sessions expire after 60 minutes by default. You can customize the duration for sessions in authentik and for the apps that use authentik as an SSO, as well as what type of credentials (fingerprint, code, etc.) are required when a user logs back in.
- **Privileged accounts**: Additionally, IAMs allow you to manage privileged accounts (such as super-users) from a single interface. Frequent monitoring of activity, deleting old accounts, and minimizing the number of super-users greatly reduces the chances for bad actors to use dormant accounts and gain access.
- **Multifactor Authentication**: MFA is immensely important in mitigating cyber-attacks. In fact, Microsoft’s Group Program Manager for Identity Security and Protection, Alex Weinert, [stated](https://healthitsecurity.com/news/multi-factor-authentication-blocks-99.9-of-automated-cyberattacks) that “_Based on our studies, your account is more than 99.9 percent less likely to be compromised if you use MFA._”
- **RBAC**: using a role-based approach strengthens authentication because RBAC enforces the practice of “least privilege”; users are granted exactly the amount of privilege that they need to do their jobs, but no more.
- **Visibility and control**: An IAM provides visibility via event logs that provide details about login attempts, failed logins, password changes, and much more. Running reports and looking closely at events is important to gain early insight into malicious attempts. Furthermore, observing log activity allows for swift action when needed, such as revoking permissions for users and groups. For example, the authentik dashboard for Event Logs provides insight into all authentication actions and object manipulation, and the Directory of Users and Groups drills down at the user level, and allows for rapid account deactivation.
## Best practices for mitigation
In addition to always implementing an SSO solution, there are other steps that we in the industry can take. I use the word _mitigation_ because we will never be able to 100% prevent supply chain attacks. But we can all do more to help mitigate them.
### Evaluate your trust in vendors
Develop a close relationship with your software vendors. An interesting case to study is the CircleCI hack that began in 2020. CircleCI software contains a lot of stored secrets; after the hack, customers were advised to rotate the secrets. An alternative to storing secrets is to use [JWTs](https://goauthentik.io/docs/providers/oauth2/client_credentials#jwt-authentication), tokens that are signed and certified but do not include the actual secret.
Consider taking these steps in regards to your supply chain vendors:
- Request an SBOM (Software Bill of Materials) from your vendors. Know exactly what is included in packages and libraries that your team uses.
- Practice “_dependency vendoring_”, which is the process of having your security team review any required 3rd party tool or library and then, if it passes all the “checks”, host the software internally and have all developers download it from there. In general, limit access to the wide-open web.
- Read their security updates and release notes, understand exactly what the library or utility or update package is for and how it interacts with your software, and ask what security tests have been run.
- If you have a SaaS site hosted on the cloud, consider whether moving to on-premise is a more secure long-term plan. Deciding whether to self-host or not is a complex decision, but your site’s security is definitely an important factor.
### Lean in, act in good faith, and trust the community
Two glaring examples of responses to supply chain attacks that could have been handled better:
- The recent [3CX hack](https://thehackernews.com/2023/03/3cx-desktop-app-targeted-in-supply.html) (in which malware was uploaded as part of what users thought was a regular upgrade) was initially dismissed by the CEO, who stated that the notifications from the community were false positives since the 3CX internal security checks showed all was fine. This is a prime example of choosing not to listen to experts in the community but instead rely only on internal checks.
- The attack on one of Okta’s vendors in 2022 also displayed a lack of transparency and good faith, as I described in a recent [blog](/blog/2023-01-24-saas-should-not-be-the-default/item.md) about the security benefits of self-hosting. Okta co-founder and CEO Todd McKinnon initially [tweeted about the attack](https://twitter.com/toddmckinnon/status/1506184721922859010?s=20&t=o7e6RA25El2IEd7EMQD3Xg), stating that the attack was “investigated and contained” and even framing the attack as “an attempt.”
It’s important that everyone, and every company, in the industry works together collaboratively and urgently, and strives to be transparent and share information about known or suspected attacks and vulnerabilities in a timely manner.
In addition to acting in good faith and working to quickly share information, it’s equally important that supply chain attack victims also trust the community, “hear” the early warnings, and respect the collective knowledge and wisdom of the community.
### Consider building your own tools, in house
Sometimes building it yourself actually is the right answer. Don’t quickly run to a 3rd party library or tool that you can implement yourself, and if you don’t have the right resources, consider building up your security team as well as hiring and retaining enough senior-level developers to help with in-house tooling. Recognize that often, busy developers inherently trust anything that can be installed from a package server, so work to create a company culture of thinking through every download.
### Stay up-to-date with patches and security alerts
Whenever any vendor that you use within your environment’s ecosystem issues a security-related patch, make it a priority to research and apply the patch as quickly as possible. If your company or organization doesn’t have a dedicated security team, you can assign this responsibility to another appropriate person; just make sure that you have a documented, up-to-date process for handling security threats.
## What else can we all do?
Here at authentik our engineers and product team are constantly thinking of ways to further harden and protect our users’ sites. We’d love to hear from you all about what you’d like to see done by us, and by the industry as a whole. Leave us a comment below, reach out to us at hello@goauthentik.io, and visit our [GitHub repository](https://github.com/goauthentik/authentik).

View File

@ -1,113 +0,0 @@
---
title: "Monorepos are great! ... for now."
slug: 2023-04-22-monorepos-are-great
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- repos
- monorepos
- multi-repos
- Git
- code base
- startups
- authentication
hide_table_of_contents: false
---
None of us in the software industry are immune to the question:
> _How do we want to [re]define our repository structure?_
<!--truncate-->
From tiny, open source startups to behemoth companies like Microsoft, almost everyone in the software industry now uses “repos” to store, manage, and access their products’ code, docs, and tooling. A repository is the heart of the company, containing virtually every artifact that comprises the end product, and most importantly, keeping all of the constantly changing artifacts up-to-date and in sync across the development landscape.
Understandably, where to keep the company jewels is a massively important question. If you have ever worked in a startup, you might have had the joy of helping to shape that decision… and if you have ever worked in a large company, you probably had the occasion to either praise or curse the decision made early on.
Structure matters. A lot. To everyone.
The best structure (and specifically the use of a monorepo or multiple repos) for any given company or team is subjective. There is no one, definitive answer. In this blog we will take a look at a bit of history about code repositories, some of the reasons for using a monorepo or multiple repos, and then delve into our thinking here at Authentik Security, for how we manage our own code base.
### History of repo-based code
In 2010, software development author Joel Spolsky [described](https://www.joelonsoftware.com/2010/03/17/distributed-version-control-is-here-to-stay-baby/) distributed version control systems (DVCS) as "possibly the biggest advance in software development technology in the [past] ten years”.
He wasn’t wrong. He also, in that same blog, made a great point about the DVCS innovation of tracking _changes_, rather than _versions_, which is very relevant to the discussion around monorepos versus multi-repos. In a bit, we’ll discuss how the frequency and amount of changes can impact your decision on how to architect your repos.
> 👟 **Sneakerware**: software that requires walking to another machine to manually insert a physical device containing software.
A brief history of code management takes us from the massive UNIVAC machines to mainframes to distributed client/server architectures, and then continues down the road of distributed systems to tools like Subversion and Mercurial and now today’s über-distributed world of Git.
It’s worth noting the relationship between distributed code bases and two other important software development trends that came along around the same time:
- the _Agile methodology_ provided a way for development teams to move quickly and efficiently, using the power of distributed code (and sophisticated tools like Git) to collaborate, build, and release ever faster.
- the use of _microservices_; there’s a correlation (though not a strict one) between the repo structure and whether the software leans towards monolithic or is based on microservices (where smaller, loosely coupled services work together to create an application or platform). If you use microservices, you probably have multiple repos, but this doesn’t always have to be the case. It’s a [perfectly fine solution](https://medium.com/taxfix/scaling-microservices-architecture-using-monorepo-domain-driven-design-ced48351a36d) to use a monorepo to store all of your microservices code, and thus reap the benefits of a monorepo.
As it always is with software, and humans, most would agree that our current state in the evolution of repos is working fairly well… but we always push for optimization and further innovation.
### Hello, Goldilocks: what is “just right”?
Deciding on the optimum architecture for your repo[s] requires serious research, strategy, and long-term planning, as well as an honest self-analysis of your current environment: working styles, the experience of your engineering team, the company culture around refactoring and maintenance, and what appetite there is for infrastructure support.
Considerations about the environment and type of code base include:
- **the number of projects** (and their relationships to each other)
- **activity level** (active development, refactoring, anything that results in commits and pull requests)
- **community contributions** (we want it to be easy to navigate the code base)
- **frequency of releases** (caution, possible slow build times ahead)
- **testing processes and frequency** (automated testing across _n_ repos)
- **amount of resources for infrastructure support** (as you scale…)
- **common dependency packages across projects** (update once, or… 6 times)
- **highly regulated types of software/data** (GDPR, PIP)
- **provider/deployment requirements** (i.e. typical 1:1 for Terraform module/repo)
Let’s take a look at some of the benefits and some of the challenges of both mono- and multi-repo structures, and how they relate to specific environments.
#### Monorepos
One of the best [definitions](https://monorepo.tools/) out there of a monorepo comes from Nrwl:
> _“A monorepo is a single repository containing multiple distinct projects, with well-defined relationships.”_
This definition helps us see why monorepo does not necessarily equal monolith. A well-structured monorepo still has discrete, encapsulated projects, with known and defined relationships, and is not a sprawling incoherent collection of code.
It’s generally agreed that monorepos come with huge advantages, most specifically the ease of running build processes, tests, refactoring work, and any common-across-the-code-base tasks. Everything is there in one place, with no need to run endless integration tests or cobble together complex build scripts to span multiple code bases. This can increase development, testing, and release efficiency. Similarly, the use of a single, shared code base can speed up development and innovation.
Monorepos help avoid siloed engineering teams; everyone working in the same code base leads to increased cross-team awareness, collaboration, and learning from one another.
Now for the challenges presented with monorepos. Frankly, monorepos can be expensive when the size and number of projects start to scale up. Getting hundreds of merge conflicts is no one’s idea of fun. Google, Meta, Microsoft, Uber, Airbnb, and Twitter all employ very large monorepos, and they have also all spent tremendous amounts of time, money, and resources to create massive infrastructure systems built specifically to support large code bases in monorepos. The sheer volume of testing, building, maintenance, and release workflows run against such code bases simply would not scale with your typical out-of-the-box Git-based system.
For example, even back in 2015 Google had [45,000 commits](https://www.youtube.com/watch?v=W71BTkUbdqE) _per day_ to their monorepo. Not surprisingly, they built a specialized tool for handling that scale, called Blaze. The open source version of this tool is released as Bazel.
Similarly, in order to manage their large monorepo, Microsoft developed the [Virtual File System for Git](https://en.wikipedia.org/wiki/Virtual_File_System_for_Git). The VFS for Git system utilizes a virtual file system that downloads files to local storage only as they are needed.
Needless to say, most of us don’t have those types of resources.
#### Multi-repos
For companies with multiple projects whose code bases are not necessarily closely related, or with teams who work on very different areas of responsibility, a multi-repo structure is likely the obvious answer. However, even for companies with projects that are large and coupled, maintaining multiple, smaller repos can be the simplest option. For example, if you have a large, enterprise-level web application as your primary product, but are also building mobile versions of the app, it makes sense to keep the mobile apps’ code bases in a separate repo. (Spoiler alert: that’s us in the mid-term future!)
Most obviously, using multi-repos reduces the need for investing in large infrastructure teams and systems. With separate code bases, there’s no risk of facing hundreds of merge conflicts, or devoting hours to updates to dependencies, build processes, and deployments. Developers can move quickly, with minimal need to coordinate across teams or apply dependency updates that are not relevant to their code. Releases can happen faster, build times are down, and simplicity often translates into efficiency.
The challenges for multi-repos are basically the inverse of the advantages of a monorepo. With multiple repos, the effort to sync updates for libraries and other dependencies is increased, and if there is any coupling between projects or services, or shared code bases, then special workflows need to be implemented to manage the cross-repo communication. Furthermore, separate repos mean that it is more difficult to enforce common practices and workflows, and for disparate teams to learn from one another.
### authentik advantages of a monorepo, for now
We are still a small company here at Authentik Security, moving fast to grow our products feature set and leap-frog our competitors, while staying true to our open source origins. We want to innovate, release, and at this stage, tilt towards rapid development. Our engineers and infrastructure team have the ability and desire to collaborate closely and learn from one another; this culture is important going forward, and using a monorepo works as a compelling incentive for team transparency and support across the projects.
The history of authentik provides some additional insight into our use of a monorepo; as the single maintainer for many years, a monorepo was simply easier for me to manage, and even as our wonderful community grew and contributions increased and sponsors appeared, the benefits of a monorepo remain.
So for us, at this stage, we benefit greatly from using a monorepo for the vast majority of our code base (and documentation). Using a monorepo means that as a team, we closely integrate our work: coding, documentation, test suites, refactoring efforts, common dependencies and tooling.
Of course, we have our eyes open and looking towards our future.
> Ironically, as our code base and feature set grows, we believe that we can best retain our focus on building and shipping new features… _by moving towards a multi-repo structure_.
The reasoning is that we do not want to be forced to focus on supporting the infrastructure needed to scale a super-large monorepo, nor on lengthy build times and complicated code management processes. So for now we will continue with our monorepo, but when the Docs or Product teams start whinging about long build times, or when the infrastructure team grows faster than the dev team, we will take another look at our repo structure!
### Where is your tilt?
Share with us your thoughts about how companies choose a repo structure, what works best when, and about our reasoning here at authentik for sticking with a monorepo for now. Leave us a comment below, reach out to us at hello@goauthentik.io, and visit our [GitHub repository](https://github.com/goauthentik/authentik).

Binary file not shown. (deleted image, 58 KiB)

Binary file not shown. (deleted image, 63 KiB)

Binary file not shown. (deleted image, 146 KiB)

View File

@ -1,167 +0,0 @@
---
title: I gambled against React and lost (and I don’t regret a thing)
slug: 2023-05-04-i-gambled-against-react-and-lost
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- blog
- react
- lit
- frontend
- web framework
- api
hide_table_of_contents: false
---
Back in 2018, I made a fateful decision: I chose to rebuild authentik using [Lit](https://lit.dev/) and not React.
We like to think that technical decisions are primarily, well, technical, but some of the biggest consequences of these decisions come from how a technology is adopted and used, not from the technology itself.
So it was with React.
In this post, I’ll explain why I made this decision, how it did and didn’t pay off, and why, ultimately, I don’t regret it. The point isn’t to sway you toward or away from React or to make an argument about web frameworks in general, but to encourage a discussion about the choices early-stage startups have to make.
<!--truncate-->
And when I say startups here, I’m not talking about the scale-ups still calling themselves startups or the startup darlings with more dollars of venture capital than they can burn through; I’m talking about people like me: developers working full-time on trying to build a company out of a popular project.
## React vs. Lit: A brief history and a one-sided victory
React was first released in 2013 and when I was building the initial open source project that would eventually become Authentik Security, React was a known option but not, as it arguably is now, a default.
In 2015, Netflix and Airbnb started using React. React Native for iOS came out that same year and later, React Native for Android. By 2018, the year I made the decision, 28.3% of developers had started using React.
![image2.png](image2.png)
_(data from 2018)_
React was popular and rising, but not an obvious choice, and not a choice that anyone would be surprised at you for not making. Now, of course, it’s all different.
2022 StackOverflow [research](https://survey.stackoverflow.co/2022/) shows that 44.31% of developers now use React.
![image3.png](image3.png)
_(data from 2022)_
To get a sense of Lits popularity, in comparison, I had to narrow my search to JavaScript developers. Among JS developers, according to the [2022 State of JavaScript survey](https://stackdiary.com/front-end-frameworks/), Lit is used by only 6%.
![image1.png](image1.png)
It’s not just a sheer popularity difference in the surveys:
- React has 232,063 packages on [NPM](https://www.npmjs.com/search?q=react) (Lit has [2451](https://www.npmjs.com/search?q=lit))
- React has over [1600 contributors](https://github.com/facebook/react) (Lit has [149](https://github.com/lit/lit)).
- React has over [14,000,000 users](https://github.com/facebook/react). (Lit has over [26,000](https://github.com/lit/lit)).
I won’t say this success was impossible to predict, but it also wasn’t obvious.
These dynamics follow virtuous cycles: The more popular React became, the more developers wanted to try it; the more developers tried it, the more developers became proficient with it and the more they built with it; the more things built with React, the more companies demanded React skills and as demand rose, so did supply.
These numbers have real consequences in both technical and business contexts.
The more popular a framework, tool, or language is, the more we know about how to use it effectively and in which ways it’s most likely to fail.
Dan McKinley, VP of Engineering at Mozilla, [writes](https://mcfunley.com/choose-boring-technology) that “When choosing technology, you have both known unknowns and unknown unknowns.” He recommends choosing boring technologies (which React, arguably, has become) because “the capabilities of these things are well understood. But more importantly, their failure modes are well understood.”
And as a technology gets more popular, businesses that have adopted it benefit from a larger hiring pool. Just as it’s easier to find a Python developer than a COBOL developer, so is it easier to find a developer with React experience than it is to find one with Lit experience.
Ironically, I know this tradeoff well because I made essentially the same decision in reverse when I [picked Python](https://goauthentik.io/blog/2023-03-16-authentik-on-django-500-slower-to-run-but-200-faster-to-build) as Authentik’s primary programming language. Choosing Python made hiring easier, but Python also imposes some performance limitations that we’ve had to mitigate and work around.
## Startups are made of informed gambles
The sum total of all the words written across the Andreessen Horowitz, Sequoia, and Accel blogs is aimed at one goal: _Reducing the amount of startup success that’s owed to successful gambling_.
This makes sense: The more that founding and building startups resembles a repeatable science, the safer venture capital funds are. But we don’t always take this idea to its logical conclusion.
Startups have often succeeded, broadly, due to correct (in retrospect) gambles on technology and consumer trends, but startups make a multitude of other bets on people, on programming languages, on business models, and yes, on frameworks.
When we look backward, though, we tend to attribute too much credit to people who made gambles that turned out right. I’m not trying to take credit away from anyone, but rather to reframe the kinds of decisions people are making, so that present and future founders aren’t confused: these choices are always partially bets.
### Why I chose Lit
The most important context to this decision is in the header: Authentik Security is now a team and a company, but in 2018, Authentik Security was just me and Authentik Security was authentik, the open source project.
I was building authentik as a hobby and all the work I did on it was done outside my full-time work. In that period, a few limitations rose to the top:
- I had few resources, both in terms of time and funds.
- I wasn’t a front-end engineer.
- I wanted to be able to build features over time, at my own speed, rather than work on nothing else for three months before having anything to show.
At the time, hiring potential wasn’t on my mind: Building and shipping was.
In the beginning, the project was entirely rendered server-side. With Django, I had some tooling available that made it easy to build forms for submitting and showing data as well as adding pagination, tables, and sorting.
But a few features emerged on the roadmap that soon made this setup feel less simple. I wanted to give users the ability to refresh data with a button press and complete searches without reloading the page, for example.
In late 2020, I started migrating to Lit, already knowing a lot of work was ahead. At that point it would often happen that when new features were introduced, they would be in the UI but not in the API, often due to forgetfulness. This turned things into a constant cat and mouse game to keep the API updated without knowing what endpoints were actually required.
Then, toward the end of 2022, with support from [Open Core Ventures](https://opencoreventures.com/), [authentik became Authentik Security](https://goauthentik.io/blog/2022-11-02-the-next-step-for-authentik). And once again, the product roadmap made some demands.
Inflexibility was an early flaw in authentik. Because we relied on server-side rendering, we also relied, and made our users rely, on browsers. But we had always wanted to go beyond that, to get to a point where there was a dynamic back-and-forth between the client and the server.
Flexibility, then, was the primary goal, and that’s the primary lens through which I evaluated the libraries and frameworks available at the time.
### 3 reasons I chose Lit over React
My choice of Lit came down to three factors, all of which emphasized that goal of flexibility.
- **Lit used a browser-native technology**. Traditional web development often involves a lot of clashing CSS files, making it difficult to use, say, a component from one library and a different component from another. Lit doesn’t have this problem because it uses a [shadow DOM](https://developer.mozilla.org/en-US/docs/Web/Web_Components/Using_shadow_DOM) and the shadow DOM is a standard accepted by all browsers. We had flexibility on one end, with browsers, but on the other end, because these web components were browser-native, we could also use them across numerous frameworks (including React).
- **Lit offered better modularity.** With the shadow DOM functionality, I was able to isolate individual components and apply CSS to only those components. That meant greater flexibility because I could mix and match, but it also meant more modularity. I didn’t need a giant CSS file that could cause all kinds of effects when changed. Instead, I had modular bundles of isolated components and individual CSS files.
- **Lit didn’t require me to rebuild**. This was less an advantage of Lit and more so a disadvantage of React. If I had chosen React, I would have had to rebuild authentik from the ground up, or use a lot of hacky workarounds to migrate at a slower pace. And once I had done that, I would have been limited, to a degree, to what React and the React community offered. There were things that were standard to browsers that I wouldn’t have had access to, or would have only had access to if they had explicit React support. There were bridges between standard components and React components, but they were built by community projects and I didn’t want to get stuck relying on a project some [random person in Nebraska is thanklessly maintaining](https://xkcd.com/2347/).
### Lit and React: apples and oranges
So far, I’ve skipped over what might be the most important way to compare Lit and React: Lit is a set of utilities for building web components and React is a web framework.
As React, and web frameworks in general, have become prominent, this kind of comparison has gone understated. If you only compare frameworks against other frameworks, you’re slipping into the assumption that you need a framework at all. As you compare pros and cons, you might be missing out on a whole world of pros and cons that a different approach provides.
The advantages I described above don’t come out of Lit being _better_ than React, really, but come from Lit offering an approach that suited my needs better than React did (at least at the time).
With Lit, we could render the main page server side but the HTML that the server was rendering would contain web components with different features. That made for a very different migration path, one that allowed us to migrate part by part.
If we wanted, for example, to add a new component for a table, we could have the server return it without changing too much on the server side. But on the client side, we could add logic and pagination and other features over time. That just wasn’t doable before we adopted Lit, and Lit made that kind of migration path more doable than React would have.
## Tradeoffs all the way down
Scroll through HackerNews and you’ll find numerous, often tense or fiery, discussions of frameworks. Some people will be diligent about costs and benefits, but many will be brand-loyal, arguing on behalf of one framework and trashing all the others.
> “My fellow engineers, please stop asking ‘Is this process good or bad?’ and start asking ‘Is it well-suited to my situation?’” - [Jocelyn Goldfein](https://review.firstround.com/the-right-way-to-ship-software), former engineering executive at Facebook and VMware
The lesson I learned from choosing between Lit and React and living with the consequences is that these decisions are tradeoffs all the way down.
The tricky part of this question is that identifying your situation is harder than it seems. Scale-ups and enterprises can rely on better pattern-matching than true, early startups can. Each startup, like [unhappy families](https://www.goodreads.com/quotes/7142-all-happy-families-are-alike-each-unhappy-family-is-unhappy), is unhappy in its own way.
When I defined our situation, three factors came to the top:
1. Shipping was our top priority.
2. A faster shift in rendering approaches was more important than the slower hiring speed it would cause.
3. Flexibility was more valuable than the help a framework would offer.
Remember, at the very beginning, it was me alone working on an open source hobby project when I had time outside of a full-time job. Especially considering the work I had already done and my lack of frontend engineering skills, it was clear that it was more important to build and ship than to restart.
Here, the tradeoff was that I was able to build authentik faster and deliver it to users sooner, but now I’m missing out on the advantages that being involved with a thriving community like React’s would offer.
With Lit, we could migrate more easily and more practically, migrating more and more over time rather than either rebuilding or migrating all at once. Looking back, I don’t know if I could have completed such a major shift in how we were rendering, and if I could have, I don’t know if it would even be done by now.
So yes, finding and hiring developers with Lit experience is much harder than finding and hiring developers with React experience. Hiring developers is already notoriously hard, and we made it harder. There is, however, an advantage: due to Lit’s smaller user base, in the rare case that we do find a Lit developer, they tend to be very passionate about the same reasons why we chose Lit.
That said, there’s a tradeoff to the tradeoff: React is so popular and so approachable that you have to be careful about finding really good developers. Lit, because it’s less popular, has a better signal-to-noise ratio.
Since flexibility was our highest priority, the primary advantage of a framework became a disadvantage for us. The upside of a framework is that it helps; the downside is that it helps.
For some situations, having decisions taken away from you by the framework can make everything easier. But for other situations, including ours, it was better to take on more work so we could get the flexibility we knew our users would want.
That all said, the main takeaway from realizing the extent of these tradeoffs is that, as founders and developers, we will eventually, inevitably, realize a decision had tradeoffs we didn’t expect. We’ll think, mistakenly, that if we had only known everything in the beginning, then we would have made another choice. That can create a temptation to wait and plan and cower, but instead, it should push us to build, ship, and iterate.
Way back in 2009, Jeff Atwood found the [perfect words](https://blog.codinghorror.com/version-1-sucks-but-ship-it-anyway/) and they still apply more than a decade later: “At the end of the development cycle, you end up with software that is a pale shadow of the shining, glorious monument to software engineering that you envisioned when you started.”
Imperfection is inevitable and gambling is necessary, meaning well always have wins and losses, pros and cons, and benefits and costs tradeoffs all the way down.
## Plot twist: We are React users after all
A different version of this post is a polemic, a hot take that says “Hey, you know the most beloved web framework of the past five years? It actually sucks.” But that’s not true and that’s not what I wanted to do.
I described a lot of reasons why I didn’t choose React back in 2018, but even now, to be transparent, I tend to be skeptical of the dominance of web frameworks. Even then, agnosticism has to be central.
Authentik’s documentation website is built with a framework that uses React. I’m not opposed to using React elsewhere if it fits our needs best. Everything comes with tradeoffs and everything has a place.

View File

@ -1,66 +0,0 @@
---
title: "Fixed working hours are an outdated concept: 71% of HR leaders agree"
slug: 2023-05-12-fixed-working-hours-are-outdated
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- blog
- flextime
- working hours
- employees
- employers
- flexible hours
- job perks
hide_table_of_contents: false
---
Face it, it is difficult to write about high tech, IT-based, computer-centric jobs without feeling that a bit of privilege exists in this space. Many of us in the software industry have employers who are sympathetic to, or even promote, the concept of “flex-time” and other enticing perks.
It is a major perk, even a luxury, to not have to clock in at a specific hour and then somehow miraculously wrap up your work and clock out in exactly eight hours. An act as simple as stopping at a pastry shop before work, or taking an extra long morning walk, without fretting about the exact minutes on your watch, is a privilege… but one that IT workers are increasingly insisting on having.
<!--truncate-->
> **71% of HR leaders believe the Monday-Friday, 9-to-5 workweek is outdated, according to a 2022 [survey](https://www.capterra.com/resources/flexible-work-time/).**
It’s true that software companies in some countries are less amenable to flexible schedules and other relatively recent practices such as remote work or job-sharing, but the fact is that flexible working hours are still happening, a lot, even at the hard-core, old-school, corporate-style companies. And there’s a reason for this: being human.
### “I’ve got this!”
Humans prefer, inherently, to rely on our own instincts and analyses. When we feel empowered and trusted to work in the way that we feel is the most pragmatic, we tend to embrace the tasks in front of us with more enthusiasm and confidence (resulting in higher productivity). The opposite sensation, one of micro-management and lack of trust, freezes us in our tracks… and reduces productivity. Understandably, being dictated to about exactly _when_ one must do one’s various tasks implies a lack of trust.
> _Indeed, HR organizations are realizing that [strict work hours are a deterrent](https://www.capterra.com/resources/flexible-work-time/), and that the vast majority of employees will reject jobs that require a very specific start and end time._
Finishing a big project is rewarding, and when that goal is achieved, we are rewarded with a sense of accomplishment and self-approval (and hopefully recognition from your team and leadership). That feeling of success is what keeps us motivated; we value outcomes, the tangible deliverables, but we do not derive enjoyment from the actual time it took to complete the task. That is, we dont celebrate the hours and weeks of work, but rather the outcome.
We know that employees are happier and feel more valued when their managers measure performance based on outcomes, instead of the amount of time spent on a project or task, so it makes sense that so many companies are promoting a policy of flexible working hours.
### Efficiency of cognitive optimization
Software developers, and many others in this field, rely on brain-power, brain-fitness, brain-agility, and frankly, on the willingness of our brains to cooperate with the task at hand. In reality, we are mostly at the mercy of our brains, and what they feel up to working on at any given moment.
> “_[Cognition is dynamic.](https://pubmed.ncbi.nlm.nih.gov/30266263/)_”
However, that dynamism can be harnessed and used to optimize our cognitive work. Being aware of what state our brains are in at the moment allows us to select tasks that are appropriate for the current cognitive “mood”. Feeling super-alert and deeply technical? Go ahead and dive deep to pump out a chunk of code for a new feature, or script a test plan, or refactor to solve a longstanding bug. Or, if you are feeling mentally exhausted but have excess energy, use that energy to do rote tasks that don’t require much brain work. Or, as is sometimes the case with work that demands highly functioning cognitive effort, perhaps you are simply burnt out and unable to focus at all. Take a long walk, play a quick game, step away from your work and brew a second cup of coffee… log in late, log off early, and get back to it when your brain is ready.
While this might seem to be verging on irresponsible, using a flexible work schedule to your advantage can be a huge benefit for both employee and employer. By playing into, and working collaboratively with, our own brains, we can actually increase productivity, creativity, and innovation.
This skill of optimizing for when you work on what type of tasks can be considered as the antidote to the downsides and churn of intense multi-tasking. Recent studies have shown that doing too much multitasking at work can be counter-productive, because of the high “[switching costs](https://www.apa.org/topics/research/multitasking)”. If, instead of forcing our brains to frequently switch contexts and start the next task on the list, we first assess the current cognitive “mood” of our brains and then work on the types of tasks that align well with that mood, we can increase our productivity (and happiness).
### Reality of life
It’s a welcome cliché nowadays to acknowledge that everyone has, at some point or another, “something stressful going on in their life”. This awareness of the reality of life’s challenges is yet another reason why flexible work schedules are considered humane perks, and why employers are wise to pragmatically acknowledge this and adjust their expectations.
> _“Peak productivity doesn’t always align with traditional business hours.” ([source](https://www.capterra.com/resources/flexible-work-time/))_
Life isn’t neat. There are school obligations, family needs, personal care, doctors’ appointments, and the list goes on. The reality of office hours, and daylight hours, is inflexible. Work hours, however, can be flexible.
### Global team distribution
Here at [Authentik Security](https://goauthentik.io/), we are globally distributed with three different time zones in the US and two in Europe. Many companies, including large international companies, have worked with even more extreme time zone spread, for decades, so this model is proven.
This model of wide-spread working hours across teams is yet another pragmatic reason for implementing flexible working hours. Allowing European-based team members flexibility in choosing to start their work-day later in order to collaborate with US-based colleagues means that the European employees can have calm mornings focused on family or personal needs, while the US-based employees can start earlier and log off mid-afternoon. Or another alternative is implementing “split hours” where an employee works some hours in the morning, and some later in the day, with a longer break in the middle.
Ultimately, the ability of the employee to choose how best to get their work done, and when to work on what tasks, is both a luxurious perk and a pragmatic necessity, at least in the somewhat privileged world of software.

Binary file not shown. (deleted image, 889 KiB)

View File

@ -1,92 +0,0 @@
---
title: "Join us for an authentik hackathon, 2023!"
slug: 2023-05-25-join-us-for-an-authentik-hackathon
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- blog
- hackathon
- collaboration
- doc sprint
- git
- GitHub profile
hide_table_of_contents: false
---
:::tip
We've published the hackathon details! See [here](https://goauthentik.io/developer-docs/hackathon/).
:::
We are thrilled to announce the first ever Authentik Security hackathon! The event will be online, over the course of a week in summer of 2023. More details about the exact days, registration form, and agenda are coming soon.
Yes, there will be swag and prizes and accolades, possibly even low-key Git-fame.
More importantly than Git-fame, a hackathon gives us all (authentik employees and our amazing community) a chance to connect and collaborate and learn from one another as we work with the authentik code base and documentation.
The summer-time schedule for this first authentik hackathon comes about 9 months after we announced the formation of our new company, Authentik Security, back in November 2022 in the blog “[Next steps for Authentik](https://goauthentik.io/blog/2022-11-02-the-next-step-for-authentik)”. We think that getting together with our incredible community, and our still new-ish development team here at Authentik, is a great next step in our journey!
<!--truncate-->
![alt_text](./hackathon-image.jpg "image_tooltip")
The magic of an organized hackathon is the ability to explore complex challenges in a collaborative, supportive environment, and really put into action the power of multiple brains. This environment fosters deep learning and stimulates trust and confidence… not to mention the potential for career-long connections and accomplishments.
For that, among many reasons, we hope you will join us for this first-ever authentik hackathon; come build something new with us and add another notch in your Git profile!
We'll share more soon about specific goals and the functional areas of the code where we want to focus, though all ideas and input are welcome.
## Hackathons: the ever-popular event!
Hackathons have been around a long time in the software world; an event put on by OpenBSD (a free Unix-like operating system) in 1999 is widely considered to be the first hackathon. This was followed closely by a hackathon put on by Sun Microsystems; their event was focused on engineers developing Java programs to run on the Palm, an internet-connected, handheld “personal digital assistant” (PDA).
Sponsorship from large tech companies continued to be the norm, but during the first decade of the 2000s the format, purpose, and typical attendees of hackathons evolved, with investors taking note of the incredible innovation and product-creation capabilities of hackathons. By the late 2000s, open source projects were a focus, with the power of the community becoming evident.
The popularity of hackathons does not seem to be slowing down at all; indeed, they seem more prevalent than ever, and have long since proven that collaboration in open source benefits all sectors of the software industry. Furthermore, with new developers seeking jobs realizing the value of investing in their “contributor profiles” on GitHub and GitLab, and with university classes promoting participation in open source projects, joining hackathons is a win-win.
## Behind the scenes
There's a lot that goes into running a hackathon; entire companies now focus on doing this work!
Some fundamentals of a successful hackathon include:
- having a very clear agenda
- abundant over-communication
- easy-to-find and easy-to-follow instructions for sign-up and participation
- a live chat room where participants can ask questions and share ideas
- moderators in the repo to review and merge PRs
- daily check-in video conferences
Beyond these important basics, another important consideration is deciding which issues, features, or challenges to work on during the hackathon. It's fantastic to gather enthusiastic people to work together, but that energy needs to be focused and guided towards the contributions that will add the most value.
This focus on ideation (exploring and defining the main themes and ideas for the hackathon) should be one of the first steps of planning any hackathon.
> “Ideation is a crucial part of the hackathon journey because the primary focus of a hackathon is to enable problem-solving. You aren't there just to write the best code but first to solve a problem that impacts people.” ([source](https://dev.to/appwrite/the-subtle-art-of-hackathon-ideation-1n99))
A typical process is to have some teams or individuals working on a mix of new features, others on known bugs, and others on popular enhancements. This provides participants a chance to do what they do best, be that writing new code or digging into debugging work.
## Don't forget about the … !
Hackathons aren't just about code; there's also documentation, translations, website pages, and more.
Documentation is an important part of any software project, plus jumping into the docs is a great way for someone who doesn't code (or wants a break from coding) to still participate and contribute. Docs Sprints, also known as Docathons, have been around almost as long as hackathons. [Sarah Maddox](https://www.atlassian.com/blog/archives/come_join_us_in_an_atlassian_doc_sprint$) made Doc Sprints fun and famous in the early 2010s, managing to bring people together from across the globe for multi-day, chocolate-fueled sessions. Our own tech writer here at Authentik Security held a one-week [Doc Sprint in Kyiv, Ukraine](http://bhmarks.com/blog/ui-components-doc-sprint-hello-kyiv/) that resulted in a completely restructured book about UI Components.
For our first authentik hackathon, let's remember the docs and more; if the work you are doing for the hackathon means that the docs need to be updated, jump into the repo (same repo as the code!), or if you want to focus on the docs and help us improve and clean up our existing content, that would be great too. If you see translations that could be improved, visit our [translation project](https://explore.transifex.com/authentik/authentik/) at Transifex and submit your contributions.
## Input on the authentik hackathon event?
We'd love to hear from you all about what type of hackathon you'd like to see us put on!
Here's a quick summary of our plans so far; let us know your preferences and ideas.
We are thinking of a multi-day event, with time for participants to get to know more about the project and have discussions about where we want to take the next set of features.
Kickoff will be on a Tuesday, when we will go over the agenda and instructions, answer any questions, and select which Issues to work on. Wednesday and Thursday will be dedicated to working on the PRs. On these working days, we will have a dedicated chat channel open, and a daily “check-in” video conference meeting.
Friday will be for wrap-up, final polishing, and signups for demos on Friday afternoon/evening and Saturday. We think having the hackathon extend into Saturday is a good way to give people time on the weekend to demo if their week's schedule is busy, but please let us know your thoughts.
And back to the swag and fame… after the demos on Saturday, we'll either do a real-time vote amongst all participants to select the “most impactful” contributions, or conduct an online vote, with all votes due by the following Tuesday.
We are looking forward to hearing your thoughts, and to seeing you at the hackathon this summer. Reach out to us at [hackathon@goauthentik.io](mailto:hackathon@goauthentik.io) and join us on our **#hackathon23** [Discord channel](https://discord.com/channels/809154715984199690/1110948434552299673) with any suggestions or questions!


View File

@ -1,101 +0,0 @@
---
title: "Building Apps with Scale in Mind: Key Considerations and Strategies"
slug: 2023-06-13-building-apps-with-scale-in-mind
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- blog
- scalability
- app development
- sharding
- distributed systems
- performance
hide_table_of_contents: false
image: ./image1.jpg
---
When building apps with scale in mind, the fundamentals involve designing and developing applications in a way that allows them to handle increased user demand, larger data volumes, and growing functionality without compromising performance or stability. Scaling an application effectively requires careful planning, architecture design, and the use of scalable technologies. This blog will explore some key considerations and strategies for building apps for scalability.
![](./image1.jpg)
<!--truncate-->
The primary considerations when developing a scalable application include:
- Architecture, both of the application and the system on which it runs
- System scalability, such as horizontal scaling and containerization
- Database scaling: a scalable app depends on scalable data access
- Asynchronous processing, with message queues and background processing
- Performance optimization, for code and queries
- Fault tolerance and failover, to keep your application operational
Let's look more closely at each of these topics, and also discuss the use of cloud infrastructure and Agile development technologies, as well as testing and monitoring your application.
## Modular and Scalable Architecture
A modular architecture with separate application components, such as the user interface, business logic, and data storage, allows for independent scaling of different modules based on demand. Two popular architectures for applications that need scalability are MVC and microservices.
### The Model-View-Controller Pattern
A classic architecture with a layered approach like the Model-View-Controller (MVC) pattern enables the scaling of individual components without affecting the entire app. The MVC architecture has been around for decades now, but remains popular and effective. Both Laravel and Spring use MVC, and Ruby on Rails, AngularJS, and Django use variations of MVC. For Django, the architecture is referred to as MVT (Model-View-Template); the data written to the template comes from the model layer, and the flow is “controlled” by the Django framework itself.
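To make the layering concrete, here is a minimal sketch of the MVT split in Django. The `Article` model, `article_list` view, and template path are hypothetical names used purely for illustration; the point is that the model, the view logic, and the template can each be changed, tested, and scaled independently.

```python
# models.py -- the model layer owns the data (hypothetical Article model)
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    published = models.DateTimeField(auto_now_add=True)

# views.py -- the framework routes the request here; the view selects data
# and hands it to a template, which renders the response
from django.shortcuts import render

def article_list(request):
    articles = Article.objects.order_by("-published")[:20]
    return render(request, "articles/list.html", {"articles": articles})
```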
### Microservices Architecture
Another option is to adopt a microservices architecture, where each app component runs as a separate service with its own data storage, deployment, and scaling capabilities. This approach can provide greater flexibility and resilience, as individual services can be scaled, updated, or replaced without affecting the entire app. While microservices architecture offers several advantages, it also introduces certain complexities. Managing multiple services can be challenging, as it requires coordination and can result in increased network latency. Ensuring data consistency across different services is another common concern. Here, with authentik, we chose to use a [mono-repo approach](https://goauthentik.io/blog/2023-04-22-monorepos-are-great), but perhaps the most well-known example of successful (and necessary) microservices architecture is [OpenStack](https://www.openstack.org/).
## Horizontal Scaling for System Servers
The architecture of your system also matters. Horizontal scaling means adding more resources, typically of the same type, to the system. Rapid, even automated, horizontal scaling ensures that your app keeps running, no matter the demand. Plan for horizontal scaling: by adding more servers or instances ahead of demand, you can distribute the workload across multiple resources, ensuring your app can easily handle high traffic.
### Load Balancing
To achieve scalability, design your app to support load balancing and implement cutting-edge techniques like clustering or containerization. These strategies will help you respond quickly to sudden surges in demand and keep your app running smoothly. Imagine the benefits for your business when you can seamlessly handle significant increases in user traffic, and then scale back down when the extra resources are no longer needed. Focus on building a scalable app that can effectively handle varying levels of demand and remain prepared for any scenario.
## Clustering and Containerization
Clustering and containerization have revolutionized the way we build and deploy applications. With clustering, you can combine multiple servers or instances to work together seamlessly, providing an unparalleled level of fault tolerance and load distribution. And with containerization, deploying your app across multiple environments has never been easier. Simply package your app and its dependencies into a portable container, and voila - you're ready to scale up and deploy across multiple servers or instances. Both of these technologies are incredibly flexible, making them perfect for any project, big or small.
## Database Scalability
Just as your environment and system need to be scalable, so does your underlying database. There have been massive technological leaps in database scalability; take advantage of the following practices to ensure rapid and consistent data access for your application.
### Distributed Databases and Sharding
With the vast amount of data and high traffic typical of modern applications, selecting a distributed database that provides replication and sharding, such as PostgreSQL, is crucial. By distributing data across multiple servers, you can handle large volumes of information without any hiccups. Sharding provides yet further data distribution; chunks of data are stored on different database tables, or nodes, and optionally on different machines. Automated sharding means that a single database or server never gets overloaded; the data load is smoothly distributed and performance of the application is not compromised.
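As a rough sketch of how application-level sharding can work, the snippet below hashes a stable key (a user ID) to pick one of several shard connection strings. The shard DSNs and the choice of key are assumptions made for illustration; production systems often lean on the database's or a proxy's own sharding support instead.

```python
# Illustrative shard routing: hash a stable key so every read and write for a
# given user consistently lands on the same shard. The DSNs are placeholders.
import hashlib

SHARDS = [
    "postgresql://db-shard-0.internal/app",
    "postgresql://db-shard-1.internal/app",
    "postgresql://db-shard-2.internal/app",
]

def shard_for(user_id: str) -> str:
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(SHARDS)
    return SHARDS[index]

print(shard_for("user-42"))  # always maps this user to the same DSN
```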
## Caching Mechanisms
Caching is an effective way to boost the performance of your website or application. By implementing caching mechanisms such as Redis or Memcached, you can store frequently accessed data in memory and reduce the load on your database. This means that your users will experience lightning-fast page loading times and smooth and seamless interactions with your platform. Caching is relatively simple to configure and can make a world of difference to your application's performance.
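One common way to apply this is the cache-aside pattern sketched below, assuming a local Redis instance and the redis-py client; `get_profile_from_db` is a hypothetical stand-in for whatever expensive query you are protecting.

```python
# Cache-aside with Redis: read from the cache, fall back to the database,
# then populate the cache with a TTL so stale entries expire on their own.
import json

import redis

cache = redis.Redis(host="localhost", port=6379)

def get_profile_from_db(user_id: str) -> dict:
    # placeholder for the real (expensive) database query
    return {"id": user_id, "name": "example"}

def get_profile(user_id: str) -> dict:
    key = f"profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    profile = get_profile_from_db(user_id)
    cache.set(key, json.dumps(profile), ex=300)  # keep for 5 minutes
    return profile
```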
## Asynchronous Processing
The use of asynchronous processing allows multiple tasks (retrieving data from a table, authenticating an ID, loading an image) to run at different times (not in synchrony) rather than blocking one another. Obviously, removing chronological dependencies from as many tasks as possible can speed up overall processing.
### Message Queues and Background Processing
By performing time-consuming or resource-intensive tasks asynchronously, you can create a more efficient and scalable app that won't slow down or crash. Utilize message queues such as RabbitMQ or Apache Kafka (or Redis' native message queueing, like we use here with authentik) along with background processing frameworks like Celery, to offload tasks to separate worker processes or services. Not only will this approach help maintain responsiveness even during peak usage, but it will also help your app scale as your user base grows.
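Sketched below is roughly what that offloading can look like with Celery and a Redis broker; the broker URL, the task body, and the `deliver_email` helper are illustrative assumptions rather than a prescription.

```python
# Background processing with Celery: the web request only enqueues the job;
# a separate worker process does the slow work and retries on failure.
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

def deliver_email(user_id: int) -> None:
    ...  # placeholder for the real mail-sending logic

@app.task(bind=True, max_retries=3)
def send_welcome_email(self, user_id: int) -> None:
    try:
        deliver_email(user_id)
    except ConnectionError as exc:
        # retry in 30 seconds instead of failing the original request
        raise self.retry(exc=exc, countdown=30)

# In request-handling code, this returns immediately:
# send_welcome_email.delay(user_id=42)
```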
## Performance Optimization
Obviously you should never overlook the importance of optimizing your application, both the code and database queries.
### Code and Query Optimization
By implementing techniques like caching, indexing, and query optimization, you can significantly improve response times and reduce resource usage. And that's not all - by utilizing monitoring and profiling tools, you can identify performance bottlenecks and optimize critical areas of your app for maximum efficiency. Results include faster load times, smoother user experiences, and a more streamlined and effective operation overall.
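For example, with a Django-style ORM the sketch below combines an index on a frequently filtered column with `select_related` to avoid the N+1 query pattern; the `Customer` and `Order` models are hypothetical.

```python
# Query optimization sketch: index the column you filter on, and fetch the
# related row in the same query instead of once per loop iteration.
from django.db import models

class Customer(models.Model):
    name = models.CharField(max_length=100)

class Order(models.Model):
    customer = models.ForeignKey(Customer, on_delete=models.CASCADE)
    created = models.DateTimeField(auto_now_add=True, db_index=True)  # indexed

def recent_orders():
    # Without select_related, every order.customer access below would trigger
    # an additional query; with it, one JOIN fetches everything.
    return Order.objects.select_related("customer").filter(created__year=2023)
```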
## Fault Tolerance and Resilience
Planning for failure scenarios and building fault-tolerant systems is a crucial aspect of app development. By implementing redundancy and failover mechanisms, you can ensure that your system remains operational even in the face of unexpected failures or disruptions.
### Redundancy and Failover Mechanisms
In addition to using distributed architectures, as we discussed above, another technique known as redundancy involves replicating data across multiple servers, and/or implementing backup systems. By spreading the workload and data across multiple resources, you eliminate single points of failure and increase system resilience. In the event of a failure, the workload can seamlessly shift to alternative resources, ensuring uninterrupted service.
It is also important to assess potential failure points and plan accordingly. By proactively planning for failure scenarios, you can minimize the impact of outages, avoid data loss, and maintain a smooth user experience. Building a fault-tolerant system is a critical step in creating a reliable app that can withstand unforeseen challenges and provide consistent service to users.
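As a simplified illustration of failing over between redundant resources, the function below tries a primary endpoint first and then falls back to replicas; the hostnames and path are made up for the example, and real deployments usually put this logic in a load balancer or service mesh rather than in application code.

```python
# Naive client-side failover: walk an ordered list of redundant endpoints and
# return the first successful response.
import urllib.request

ENDPOINTS = [
    "https://primary.internal",
    "https://replica-1.internal",
    "https://replica-2.internal",
]

def fetch_with_failover(path: str) -> bytes:
    last_error = None
    for base in ENDPOINTS:
        try:
            with urllib.request.urlopen(base + path, timeout=2) as response:
                return response.read()
        except OSError as exc:  # timeouts, refused connections, DNS failures
            last_error = exc
    raise RuntimeError("all redundant endpoints failed") from last_error

# Example: fetch_with_failover("/api/health")
```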
## Putting it All Together
We've covered the primary strategies and techniques for building scalable applications, from architecture to asynchronous processing. By incorporating these principles and strategies into the app development process, you can build robust, scalable applications that can handle increased user demand, adapt to growing requirements, and provide a seamless user experience even as the application grows in size and complexity.


View File

@ -1,66 +0,0 @@
---
title: "Demystifying Security: The Importance of SSO for Businesses of All Sizes"
slug: 2023-06-21-demystifying-security
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- blog
- security
- small business
- SMB
- SSO
- authentication
hide_table_of_contents: false
image: ./Demystify-Security.jpg
---
In today's digital world, security is a critical aspect of any organization's operations. While some may perceive security as an enterprise-level feature, it is essential for businesses of all sizes to prioritize and implement robust security measures. One of the most common security measures is to implement Single Sign-On (SSO), a digital authentication method that uses a single set of credentials to access multiple applications.
![](./Demystify-Security.jpg)
<!--truncate-->
Traditionally, SSO has been associated with large enterprises that manage numerous systems and applications across their infrastructure. SSO enables users to authenticate themselves once, and then gain access to multiple resources without the need to enter separate login credentials for each system. This streamlines the user experience, improves productivity, and simplifies user management for IT administrators.
However, even small companies should strongly consider adopting SSO. Small businesses may not have the same scale, resources, or complexity as large enterprises, but they still handle sensitive data and face real security risks. That's why implementing SSO for your users is advisable: not only does it enhance a company's security posture, it also streamlines authentication processes for a more efficient workflow.
## **The Growing Need for SSO in Small and Medium-Sized Businesses (SMBs)**
In recent years, the number of cyber incidents targeting SMBs has increased exponentially. A [study by Cisco](https://www.cisco.com/c/dam/global/en_hk/assets/pdfs/cybersecurity-for-smbs-asia-pacific-businesses-prepare-for-digital-defense.pdf) showed that an alarming 74% of SMBs in India and New Zealand have already suffered a cyber incident in 2021 alone, with a whopping 85% of them losing customer information to malicious actors. This is not just a matter of protecting your business, but also your customers' sensitive data. Implementing security measures such as SSO can go a long way in safeguarding your business from cyber threats.
## **Why SMBs Are Ideal Targets for Cyberattacks**
As a small business grows, teams proliferate, leading to increased app usage. While this may be beneficial for productivity, it can also create more security vulnerabilities. Factors such as weak passwords, poorly maintained access management, and unauthenticated login processes often lead to financial and reputational damage. Furthermore, SMBs often experience limitations in resources, time, and capital, complicating the security process even more.
SMBs also face additional threats from "bring your own device" (BYOD) policies. When employees connect their personal devices to the business network, the company has neither assurance nor control over how secure the device is.
### **Potential Consequences of Security Breaches for SMBs**
When an attack occurs, stolen data can be held for hefty ransoms or sold on the dark net. In some cases, hackers engage in doxing, releasing private and sensitive information about an organization or an individual. Business plan leaks and premature product releases, especially in the tech industry, can lead to a failed product launch. Moreover, recovery from a cyberattack requires rebuilding all systems from scratch. As a result of the reputational and financial damage, downtime, and revenue loss, many small businesses fail after a breach.
## **Key Benefits of Implementing SSO for SMBs**
1. **Enhanced Security:** The primary purpose of SSO is to provide better security for your business. A Google survey found that at least 65% of people reuse passwords across multiple sites. SSO provides the option to use one strong login credential for all apps, making it easier for users to access various personal and official apps every day. SSO can also be combined with stronger authentication tools like MFA and security policies to better protect your organization.
2. **Improved Workplace Experience:** In small and growing businesses, employees often have to wear many hats. Some 27% of small businesses use an average of five apps per day, and this number grows as the business grows. SSO reduces the probability of breaches due to weak and recycled passwords, enabling employees to switch between apps smoothly with fewer security issues.
3. **Prevention of Financial Damage:** According to the [2021 Cost of a Data Breach report](https://www.ibm.com/reports/data-breach?utm_content=SRCWW&p1=Search&p4=43700075239447413&p5=p&gclid=Cj0KCQjwnMWkBhDLARIsAHBOfto1bgotBgTKWKYpgi3BUghKaNcUrHV69CGbLHAjMD6PjwDp7Kuv3yQaAsDHEALw_wcB&gclsrc=aw.ds) by IBM and the Ponemon Institute, the average cost of a data breach for small organizations (fewer than 500 employees) increased from $2.35 million to $2.98 million between 2020 and 2021. By reducing the chances of a password breach, SSO provides SMBs with essential protection.
4. **Time Savings:** Not only does using SSO save time for employees by reducing the amount of time spent typing in passwords or resetting them, but it also helps eliminate downtime caused by data breaches. With SSO, employees can focus on their work without worrying about cybersecurity threats, which is a huge relief in today's tech-driven world. By implementing SSO, companies can streamline their workflow and create a more efficient work environment.
## **Essential Features in an SSO Solution for Small Businesses**
When considering implementing SSO, small organizations should look for the following features:
1. **Scalability**: When considering an SSO solution for your organization, it is essential to look for one that offers features capable of supporting your business' growth. As your company expands, your authentication needs may change, so it's crucial to find an SSO solution that can easily adapt to those evolving requirements. Additionally, flexibility in subscription plans is vital, especially for small businesses. Opting for a "pay-as-you-go" option allows you to assess the software's suitability and effectiveness before making a long-term commitment. This way, you can ensure that the chosen SSO solution aligns perfectly with your organization's needs without locking you into a potentially unsuitable arrangement.
2. **Extensive Catalog of Apps**: When selecting an identity provider for your SSO solution, it is crucial to consider its integration capabilities. An ideal identity provider should offer seamless integration with a wide range of service providers from different vendors. This integration flexibility enables you to connect your SSO solution with various applications, simplifying the process of switching or adopting new software without the need for a completely new SSO application. By choosing an identity provider with extensive integration options, you can maintain a centralized and consistent authentication experience across your organization, regardless of the diverse set of applications you may utilize. This adaptability ensures that your SSO solution remains agile and can easily accommodate future changes in your software ecosystem.
3. **Mobile Application:** An SSO mobile app is crucial for remote work or jobs that require frequent travel. With more and more people working from home or traveling for work, it's become increasingly important to have a secure way of accessing company resources and maintaining identity security while you are on-the-go. An SSO mobile app does just that. You'll be able to access all your important files, tools, and applications with ease, without having to worry about any security breaches.
4. **Ease of Use, Intuitive UI, Solid Documentation, and Support:** When considering an SSO solution, it is crucial to prioritize ease of use, setup, and maintenance. Look for a provider that offers an intuitive user interface (UI) and straightforward navigation, as these factors greatly contribute to user adoption and efficiency. A well-designed UI reduces the learning curve and ensures a seamless experience for both administrators and end-users. Additionally, comprehensive and well-written help guides or documentation are valuable resources for troubleshooting and self-service support. It's also essential to select an SSO provider that offers a responsive and knowledgeable support team. Having access to timely assistance and expert guidance can significantly expedite issue resolution and minimize downtime. By choosing an SSO solution with user-friendly features and robust support, you can ensure a smooth implementation and ongoing management, making it easier for your organization to derive maximum value from the solution.
5. **Caters to SMB Needs:** While there are numerous SSO solutions available in the market, it is important to recognize that many of them are designed with larger enterprise companies in mind. Consequently, the specific needs and requirements of small businesses are often overlooked. However, for small businesses, an SSO solution that is easy to implement and tailored to their unique needs is crucial. It is essential to find an SSO provider that understands the challenges faced by small businesses and offers a solution that can be seamlessly integrated into their existing infrastructure. Such a solution should be cost-effective, scalable, and provide a streamlined user experience without overwhelming complexity. By choosing an SSO solution that caters to the specific features small businesses require, organizations can enhance their security, efficiency, and productivity while ensuring a smooth and hassle-free implementation process.
## **Final Thoughts**
The phrase "prevention is better than cure" is particularly true when it comes to business security - and protecting your business identity is no exception. It's crucial to educate yourself about potential risks and have a contingency plan in place to save yourself time, money, and of course, stress.
Single Sign-On is an excellent start to protecting your business identity from potential threats. By treating security as an integral part of their products and services, companies can uphold ethical standards, provide a safer user experience, and minimize the risk of security breaches.

View File

@ -1,94 +0,0 @@
---
title: "Microsoft has a monopoly on identity, and everyone knows it except the FTC"
slug: 2023-07-07-Microsoft-has-a-monopoly-on-identity
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- blog
- monopoly
- FTC
- Microsoft
- bundling
- Active Directory
- identity management
- authentication
hide_table_of_contents: false
---
The FTC (Federal Trade Commission) punished Microsoft for exerting its power in 2001, but Microsoft learned to hide its power, especially when Satya Nadella took over from Steve Ballmer and pursued a services model that builds and leverages power while maintaining plausible deniability.
At Authentik, we've seen the monopolistic powers that Microsoft has over the identity management sector, but identity is a canary in the coal mine for a much wider, much stronger monopoly.
<!--truncate-->
## How software ate the monopoly concept
In the FTC's definition of [monopolization](https://www.ftc.gov/advice-guidance/competition-guidance/guide-antitrust-laws/single-firm-conduct/monopolization-defined), Microsoft is the only company and case cited by name. Ironically, Microsoft has grown even more since that case. The FTC has caught on to how powerful their technology is, and even threatened to reverse Facebook's acquisition of Instagram, but hasn't updated its understanding of what a monopoly can be.
And because of that misapprehension, Microsoft has, ironically, become the tech industry's preeminent monopoly.
### Microsofts three landmark antitrust cases
The following three cases that the FTC brought against Microsoft highlight not a willingness to bust monopolies, but rather the evolution of Microsoft learning how to play the game of monopolization.
- [2001](https://www.thebignewsletter.com/p/microsoft-brings-a-cannon-to-a-knife): The FTC blocked Microsoft's acquisition of Intuit and sued the company for bundling Internet Explorer with PCs.
- Microsoft learned its lesson: Become friends with the government, position itself as the friendly tech company, and focus on leveraging monopolistic powers rather than seizing actual monopolies.
- Thanks to how the FTC focuses on [consumer welfare](https://www.thebignewsletter.com/p/facebook-hits-1-trillion-in-market), Microsoft learned it could maintain plausible deniability.
- [2020](https://siliconangle.com/2023/01/25/report-eu-preparing-launch-antitrust-investigation-microsoft-teams/): Slack filed an antitrust suit against Microsoft focusing on Teams
- Microsoft offered to charge for Teams ([source](https://www.google.com/search?q=antitrust+slack+salesforce&sourceid=chrome&ie=UTF-8#ip=1:~:text=Microsoft%20offers%20to,com%20%E2%80%BA%20articleshow)) and unbundle Teams from Office ([source](https://www.theinformation.com/briefings/microsoft-to-separate-teams-office-products-under-eu-antitrust-scrutiny))
- Despite a probe, Salesforce completed the acquisition of Slack ([source](https://slack.com/blog/news/salesforce-completes-acquisition-of-slack))
- Slack selling to Salesforce was likely a “[defensive merger](https://www.thebignewsletter.com/p/an-economy-of-godzillas-salesforce)” and Microsoft's flexibility shows it's willing to sacrifice a short-term monopoly for the long-term payoff of retaining monopolistic effects.
- 2023: The UK government ([source](https://www.thebignewsletter.com/p/big-tech-blocked-microsoft-stopped)) and the FTC ([source](https://www.thebignewsletter.com/p/ftc-to-block-microsoft-activision)) blocked Microsoft's acquisition of Activision.
- On the one hand, this shows that governments are still watching Microsoft.
- On the other hand, this shows governments are likely still limiting antitrust actions to blocking major acquisitions instead of proactively breaking up companies and product suites.
### Horizontal versus vertical monopolies
Much of our societal and legal understanding of monopoly is based on a two-dimensional model: monopolies are either vertical, meaning that a company integrates and controls a supply chain, or horizontal, meaning that a company acquires and controls so much of an industry that it can then exert undue power.
When the FTC made its 2001 case against Microsoft, the entire company's annual revenue was about [$28 billion](https://www.statista.com/statistics/267805/microsofts-global-revenue-since-2002/). By comparison, the identity and access industry alone was worth [$13 billion in 2021](https://www.globenewswire.com/news-release/2023/01/19/2591625/0/en/Identity-and-Access-Management-Market-Size-Worth-USD-34-52-Billion-by-2028-Report-by-Fortune-Business-Insights.html) and will be worth almost $35 billion by 2028.
The implicit model of the FTC's view of monopoly is zero-sum, meaning that there is a limited amount of industry and economy to control. But software has proven that the economy is _positive-sum_, meaning that new businesses can grow without necessarily supplanting other businesses and that new industries can emerge from inside companies (e.g. AWS from Amazon).
## Monopoly via bundling
Microsoft develops and acquires services that it then groups, or “bundles”, into one package, while also offering the same services as standalone products on other devices (for example, Office on iPad and Windows on Oculus).
Microsoft does indeed “bundle”, in business terminology, by offering a suite of products and services that are greater (and cheaper) than the sum of their parts. But Microsoft is careful not to, in _legal_ _terminology_, exclusively bundle, because a bundle that involves selling one product only on the condition of buying another product is a form of “[tying](https://www.justice.gov/archives/atr/competition-and-monopoly-single-firm-conduct-under-section-2-sherman-act-chapter-5#:~:text=Tying%20occurs%20when%20a%20firm,what%20could%20be%20viewed%20as),” which is illegal.
### Microsoft Teams
Microsoft Teams, their collaboration and chat platform, has become the latest anchor point for the Microsoft ecosystem. Earlier cornerstone products for Microsoft were their personal computers (PCs), the Windows operating system, and the Microsoft Office suite.
[Ben Thompson](https://stratechery.com/2022/thin-platforms/), a former Microsoft employee who writes about technology, business strategy, and the internet, has shared some interesting insights on his website [Stratechery](https://stratechery.com/2022/thin-platforms/):
“_This is exactly what Microsoft would go on to build with Teams: the beautiful thing about chat is that like any social product it is only as useful as the number of people who are using it, which is to say it only works if it is a monopoly — everyone in the company needs to be on board, and they need to not be using anything else. That, by extension, sets up Teams to play the Windows role, but instead of monopolizing an individual PC, it monopolizes an entire company._”
“_A thin platform like Teams takes this even further, because now developers don't even have access to the devices, at least in a way that matters to an enterprise (i.e. how useful is an app on your phone that doesn't connect to the company's directory, file storage, network, etc.). That means the question isn't about what system APIs are ruled to be off-limits, but what “connectors” (to use Microsoft's term) the platform owner deigns to build. In other words, not only did Microsoft build their new operating system as a thin platform, they ended up with far more control than they ever could have achieved with their old thick platform._”
### Identity, access, and security in the ecosystem
The power of the ecosystem plays a huge role in creating and retaining a monopoly (whether it is legally defined as a monopoly or simply works as one). A series of services that exert monopolistic control, from Microsoft managed-devices, to entire suites of products supported and integrated via Teams, results in undue control simply due to the pressure of the preceding ecosystem.
One glaring example is how Microsoft makes it easy to adopt Active Directory, giving admins a central directory of users and offering users an SSO portal for all Microsoft services. Microsoft's monopolistic power is demonstrated by the company's ability to shift customers from AD to Azure AD. As customers make that shift, Microsoft makes more money (because Azure is more profitable) and Microsoft can more easily exclude other non-Microsoft options.
## A monopoly with plausible deniability
Microsoft has a particular advantage in the world of infrastructure, administration, and security services because end users often don't know better options are out there, and admins are rarely empowered to advocate for anything better than a “good enough” solution.
Microsoft learned its lesson from [letting Internet Explorer languish](https://stratechery.com/2022/thin-platforms/) and letting Firefox catch up; now, Microsoft ensures its services are at least “good enough.” “Good enough,” however, is questionable. The on-prem version of Active Directory, for example, requires a lot of configuration to make it secure.
William Wechtenhiser argues that Microsoft has an unseen [Boeing 737 Max style crash every week](https://www.thebignewsletter.com/p/does-microsoft-have-a-boeing-737).
“_According to CVE Details Microsoft disclosed an average of 225 security vulnerabilities per year between 1999 and 2014. What did Microsoft do to address this? They dismantled their testing processes and then, when this predictably led to a really bad day, they decided to stagger their releases so that their users could do more of the testing they themselves were no longer doing. As a result the average number of vulnerabilities has increased to 627 per year in the 5 years since. Microsoft looks exactly like a company run by financiers focused on short-term gains with no fear of legal consequences and no competition in the market._”
Microsoft isn't the only identity option, but the industry has [consolidated](https://www.thebignewsletter.com/p/monopolies-and-cybersecurity-disasters), with Okta acquiring Auth0 and Thoma Bravo (which already owned Ping Identity) acquiring ForgeRock.
Slack is the ultimate example of Microsoft's power. Slack had all the advantages that identity and security services lack, and yet Microsoft Teams rapidly overtook Slack, and Slack sold to Salesforce (likely as a defensive merger).
## And yet, opportunities remain
Despite all the bundling we've discussed so far, we believe that there is expansive opportunity in the identity management space. The opportunity we see is to offer an authentication product that serves all of a company's authentication needs, from business users to consumers. We think the opportunity is viable because access is becoming an ever more important focal point for security even as Microsoft and Okta suffer breach after breach.


View File

@ -1,64 +0,0 @@
---
title: "July authentik hackathon!"
slug: 2023-07-11-july-authentik-hackathon
authors:
- name: Tana Berry
title: Sr. Technical Content Editor at Authentik Security Inc
url: https://github.com/tanberry
image_url: https://github.com/tanberry.png
tags:
- blog
- hackathon
- docathon
- team work
- code
- authentik
- git
- github profile
hide_table_of_contents: false
image: ./image1.jpg
---
> Here at Authentik Security, we are serious about your online security and our work… and we are also serious about our first ever authentik hackathon!
We described our upcoming inaugural hackathon in an [earlier blog](https://goauthentik.io/blog/2023-05-25-join-us-for-an-authentik-hackathon), and even built a dedicated [web page](https://goauthentik.io/developer-docs/hackathon/) for it, but now I want to break down some of the key reasons you should consider joining us on July 26 through July 30!
![](./image1.jpg)
<!--truncate-->
### Come on, try it, you'll like it!
First and foremost, we want to welcome those for whom this is their first hackathon, and ensure that ours is a great start to a long-lived participation in hackathons. We warn you, hackathons and the community spirit of building together can be addictive. For me, they have vibes of the trail racing community that I love so much; doing hard stuff, together… and signing up for the next one as soon as one is over.
One of the most compelling experiences I have had was at a hackathon in Las Vegas (Yes, known world-wide as the city of hackathons) when we added in a Docathon option; the significant increase in the number of people who signed up that year (as opposed to the previous) made it obvious that our industry knows that there is more to software than code.
We had the CEO of a Finnish company add a lot of conceptual topics to our documentation, and a Marketing professional create her first-ever PR to clarify some of our web pages. This was a two-day, weekend event, leading into a big e-commerce conference that kicked off on Monday, and the sense of open collaboration and common celebration over “little things” (like one's first PR!) and big things (a major new feature built by a team) was a wonderful way to get the whole event started!
All of us in the software world want to build (and sometimes break and rebuild) things. Having an event that makes it easy for everyone to use their expertise and add to the project is at the core of a successful hackathon. So what are your super-powers, your interests and abilities, and how will you apply them at the hackathon?
You can take a look at our [open Issues](https://github.com/goauthentik/authentik/issues) and see if any of them speak to you; maybe you have encountered a similar issue and want to find the answers, or maybe you have the answers and want to throw into a PR. Also, you can open a new Issue and add the `hackathon` label; we are sure you have your own ideas, too!
During the Kickoff video-conference on July 26th, we will spend some time identifying which existing Issues participants want to work on, which new ones need to be created, and if there are larger, more complex Issues that a team can be formed to tackle together.
Either way, there will be fun and challenging hacking to be done, truly something for everyone, from code to documentation to website pages, and the glue in between them all.
### And for you seasoned hackers…
As a [still newish company](https://goauthentik.io/blog/2023-03-23-whats-new-with-authentik-march-2023) and as a new team, this is our first hackathon [together], too! We have all participated in hackathons before at some level, but not yet put on one for [authentik](https://github.com/goauthentik/authentik), our SSO authentication project.
So if you have tons of experience, or even a modicum, with hackathons then come join us and help make it a huge success. We welcome your input and assistance with reviewing PRs, helping other participants get up and running, and moderating incoming questions and suggestions.
### You are more than your Git profile
While we do recognize the truth of this title, we also know that it's true that our Git repository profiles are, errrr…. often observed. Take this opportunity to add some more green squares and repos to your profile just in case someone is looking, and have fun learning more about authentik, authentication, and application security while doing it!
### Oh, there are prizes, good prizes!
Sure, we got swag. Specifically, we have awesome authentik-branded socks. Personally I love some good sock swag; useful, fun, a bit nerdy.
But we also have cold, hard cash: a prize pool of $5000, to be divided up amongst the winners after our all-participants voting is completed.
### Review the agenda, and sign up!
Have I convinced you yet? Take a look at the [agenda](https://goauthentik.io/developer-docs/hackathon/#agenda) and use the easy registration form to sign up. We're looking forward to seeing you there, and in the meantime, send any questions you have to [hackathon@goauthentik.io](mailto:hackathon@goauthentik.io) or chat with us on [Discord](https://discord.com/channels/809154715984199690/1110948434552299673).


View File

@ -1,153 +0,0 @@
---
title: "Multi-user locale management using Lit"
slug: 2023-07-20-multi-user-locale-management-using-lit
authors:
- name: Kenneth Sternberg
title: Sr. Frontend Developer at Authentik Security Inc
url: https://github.com/kensternberg-authentik
image_url: https://github.com/kensternberg-authentik.png
tags:
- blog
- lit
- multi-user
- translation
- locale
- authentik
hide_table_of_contents: false
image: ./image1.jpg
---
[Lit](https://lit.dev/) comes with its own library to help your app support multiple written languages. Lit's localization feature updates the page automatically when a language is switched during a browser session, but the documentation does not describe **how** you can switch languages.
Let's dive into how you might do that.
![](./image1.jpg)
<!--truncate-->
## Lit and Localization
Lit is a library from Google that enables the construction of fast, reactive web applications by leveraging the browser's own component model and event handling rather than imposing it from the outside as React, Angular, and other application development platforms do. Lit's [Web Components](https://modern-web.dev/) are fast, efficient, reactive, and comply with the [actual standards](https://developer.mozilla.org/en-US/docs/Web/API/Web_components) for how browsers should behave as application platforms.
Lit's only failing is that the standards came late and actual standardization on that behavior across Chrome, Firefox, Safari, and Edge didn't complete until early 2020. By that time React had been out for seven years and had a massive share of the market.
Lit has an effective [localization library](https://lit.dev/docs/localization/overview/) that supports both a static and a dynamic mode. Most developers I've spoken with prefer the dynamic mode, because it affords the user the option of changing languages without having to reload the page. We will use that mode.
## Basics of Lit Localize
The basics of Lit's localization workflow are as follows:
- Build your app, wrapping every text string you'll want the user to see in a `msg("Your text here")` function wrapper.
- Specify a `lit-localize.json` file, specifying the language you use as the source locale, and providing a list of target locales you want your application to support.
- Run `lit-localize extract`, which will extract all the `msg()` blocks and update your catalog of locale files, one separate file per target locale. If you've done this before, any already translated strings will not be discarded or overwritten.
- Send your locale files to your translators. When you get a translated file back, replace the existing file with it.
- Run `lit-localize build`, which will then build the translation files (in JavaScript or Typescript, depending on your project settings) that the `msg()` blocks will then display in the user's language (if it's available).
When your web application starts up, the top-level context must run the library's `configureLocalization()` function, which takes a configuration object with three parts: the source language's [locale code](https://www.loc.gov/standards/iso639-2/php/code_list.php), the list of available target locales, and a function that asynchronously loads the locale module for the specified target. It returns two functions, `getLocale()` and `setLocale(localeCode: string)`.
Now, any Lit web component in your application with the `@localized()` decorator will update immediately and automatically with a new language.
## Managing the Locale context
But what if we want to update the language dynamically? What if your customer enters your site and then specifies that they want, say, French instead of English? The Lit Localize library doesn't cover that, so let's do that ourselves.
Let's create a [Lit context](https://lit.dev/docs/data/context/). That's straightforward enough:
```typescript
// ./localize/context.ts
import { createContext } from "@lit-labs/context";
export const localeContext = createContext<string>("locale");
export default localeContext;
```
All we're storing in this is the string for the locale. There are any number of places where the locale request could come from: the user's browser setting, the URL, a configuration setting from the server, the default fallback. Once we have the context and the `configureLocalization()` function, we need to preserve and update that context. Here's what the top of that context object looks like:
```typescript
import { html, LitElement } from "lit";
import { customElement, property } from "lit/decorators.js";
import { provide } from "@lit-labs/context";
import { configureLocalization } from "@lit/localize";

// the default export from the context module shown above
import locale from "./localize/context";

@customElement("locale-context")
export class LocaleContext extends LitElement {
    @provide({ context: locale })
    @property()
    locale = "en";

    constructor() {
        super();
        // sourceLocale, targetLocales, and loadLocale mirror your
        // lit-localize.json settings and locale file layout
        const { getLocale, setLocale } = configureLocalization({
            sourceLocale: "en",
            targetLocales: ["fr", "de"],
            loadLocale: (code) => import(`./locales/${code}.js`),
        });
        this.getLocale = getLocale;
        this.setLocale = setLocale;
        this.updateLocaleHandler = this.updateLocaleHandler.bind(this);
    }

    connectedCallback() {
        super.connectedCallback();
        window.addEventListener("custom-request-locale-change", this.updateLocaleHandler);
        this.setLocale(this.locale);
    }

    disconnectedCallback() {
        window.removeEventListener("custom-request-locale-change", this.updateLocaleHandler);
        super.disconnectedCallback();
    }

    updateLocaleHandler(ev: Event) {
        this.updateLocale((ev as CustomEvent).detail.locale);
        ev.stopPropagation();
    }

    render() {
        return html`<slot></slot>`;
    }
```
This is fairly boiler-plate. When this component is constructed by the browser it loads the locale and sets up the update handler. Because the update handler runs in the context of an event handler, we make sure to `.bind()` it to the instance that it will need to access. When this component is connected to the browser, it will have access to the requested locale specified when it becomes part of the DOM, so we call `setLocale()` at that moment.
The `as CustomEvent` cast there is just for show; please do something smarter with an [assertion function](https://blog.logrocket.com/assertion-functions-typescript/).
The only oddity is at the top: `@provide({ context: locale })` comes from Lit's context library. It turns the object field associated with it into a context provider, and any child objects contained within this context will get **that** value, and no other, if they import and examine the context object. Attach a `@consume({ context: yourcontext })` decorator to a `@property` or `@state` field, and any Lit component will react to the change of context with a re-render, no matter how deep in the tree it is.
And finally, we don't actually want to do anything visually interesting with our context, we just want to supply the data and manage it, so our application returns an empty `<slot></slot>` object into which we put the rest of our application. Slots are rendered in the context of the [LightDOM](https://lit-element.readthedocs.io/en/v0.6.4/docs/templates/slots/), so any of your content wrapped in our `<locale-context locale="fr"><your-content></your-content></locale-context>` will have access to the full browser environment.
A few things are **not** specified in this example; if you want this object to go through the list of sources of truth above for the current locale on startup, rather than use the `@property` string, you'll need more code in the `connectedCallback` than what I've done there.
The reason we preserve `getLocale()` and `setLocale()` here is that Lit Localize's library is a singleton; if you run `configureLocalization` twice in the same browser session it throws an exception. So we make sure to run it once and preserve its localizing powers.
With all that in mind, the actual `updateLocale` library is easy:
```typescript
updateLocale(code: string) {
if (this.getLocale() === code) {
return;
}
const locale = getBestMatchLocale(code);
if (!locale) {
console.warn(`failed to find locale for code ${code}`);
return;
}
this.setLocale(locale)
}
```
I won't provide the function `getBestMatchLocale`; it takes the requested locale code you pass it and returns an object containing the path to the locale file, the exact code you want to instantiate, and a label for the language such as "French" or "English" or "Chinese (Traditional)".
It uses a prioritized table of regular expressions so that, for example, a request for `fr_FR` will be mapped to the `fr.ts` language file.
Remember that **you** supply the loader function to `configureLocalization()`, so it can take anything you want; I chose for it to take that object. If the file is not already present when you call `setLocale()`, it loads it. When that file is available, it then issues a message causing all Lit Web Components on the page decorated with the `@localize()` class decorator to request a re-render with the new language strings.
## Changing the Locale Context
This is the easiest part. I mentioned that `getBestMatchLocale` has a table with the code, a regex matcher, the human-readable label for the language, and the import instructions; you can now use that table to create a `<select>` box anywhere in your application with the label for text and the locale code for a value.
When the user makes a selection, your component just needs to send an event:
```javascript
this.dispatchEvent(
new CustomEvent("custom-request-locale-change", {
composed: true,
bubbles: true,
detail: { locale: requestedCode },
}),
);
```
...and that's it. The top-level context will receive this event and attempt to load the requested locale. If that works, it will fire off a re-render request and all your text will be updated with the new language pack.
## What We've Done
We've described and implemented a context manager that associates your Lit Web Component user and application settings with Lit's own localization library. We have provided an event listener to that context manager so that changes to the locale string will dynamically update your Lit application's displayed text in the language requested. Under the classic rule of "A class should have only one reason to do its thing, and it should do its thing well," this fits the bill: that one reason is "the locale **string** changed." We've seen how to apply localization to all our own components via the `@localized()` decorator, and we've described how we might display the list of locales and shown how a locale change request is sent to the context manager.
You now have the tools you need to provide your Lit application to customers around the world in their own language. May you find a million new customers who don't speak your language!


View File

@ -1,123 +0,0 @@
---
title: "Securing the future of SaaS: Enterprise Security and Single Sign-On"
slug: 2023-07-28-securing-the-future-of-saas
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- blog
- SSO
- security
- MFA
- JWT
- Enterprise
- security
- cybersecurity
- identity management
- authentication
hide_table_of_contents: false
image: ./image1.jpg
---
In today's digital landscape, businesses of all sizes increasingly rely on Software as a Service (SaaS) to streamline their operations and improve overall efficiency. However, as the adoption of SaaS applications continues to rise, so too do the security risks associated with these platforms. This has led to a growing demand for robust enterprise security features and Single Sign-On (SSO) solutions that can effectively safeguard sensitive data and protect businesses from cyber threats. In this blog, we'll delve into the intricacies of enterprise security, explore the benefits of SSO for businesses of all sizes, and examine the role of automation in ensuring robust security. We'll also discuss the importance of building SaaS apps with scalability in mind and highlight Authentik Security's solution, [authentik](https://goauthentik.io/), as a unified authentication tool to help secure your organization.
![](./image1.jpg)
<!--truncate-->
## The importance of enterprise security in SaaS applications
The increasing reliance on SaaS applications has made it more critical than ever for organizations to prioritize their security. With cyber threats on the rise, businesses must ensure that their SaaS platforms are protected from many risks, including data breaches, compromised credentials, and malicious insiders.
### Data breaches and their impact on businesses
Data breaches can have severe consequences for businesses, ranging from financial losses and reputational damage to legal implications. In recent years, [supply chain attacks](https://goauthentik.io/blog/2023-04-07-supply-chain-attacks-what-we-can-all-do-better) have surpassed malware-based attacks by 40%, highlighting the need for stringent security measures to protect sensitive data.
One of the most significant recent supply chain attacks was the SolarWinds incident in 2020. In this attack, hackers inserted malicious code into the company's software updates, which were then unknowingly distributed to thousands of customers. This is a stark reminder of the potential dangers associated with inadequate enterprise security in SaaS applications.
### The role of SSO in enhancing enterprise security
Single Sign-On (SSO) solutions have become an essential component of enterprise security, providing users a convenient and secure way to access multiple applications with a single set of credentials. By streamlining the authentication process and reducing the need for multiple passwords, or better yet removing passwords and instead using tokens, [JWTs](https://goauthentik.io/blog/2023-03-30-JWT-a-token-that-changed-how-we-see-identity), MFA, and other more secure methods, SSO can help minimize the risk of compromised credentials and improve overall security.
SSO solutions can also provide additional layers of authentication within an organization's software ecosystem. This can include secure handshakes between components using internal-only tokens, as well as machine-to-machine authentication using internally generated JWT tokens.
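As a rough sketch of the receiving side of that machine-to-machine handshake, the snippet below verifies an internally issued JWT against the identity provider's published signing keys using the `jose` library. The issuer URL, JWKS path, and audience value are placeholders, not any particular product's endpoints.

```typescript
// A hedged sketch: verify an internally generated JWT presented by another
// service. All URLs and the audience below are assumed placeholder values.
import { createRemoteJWKSet, jwtVerify } from "jose";

const ISSUER = "https://sso.example.internal"; // assumed internal issuer
const JWKS = createRemoteJWKSet(new URL(`${ISSUER}/jwks.json`)); // assumed path

export async function verifyServiceToken(token: string) {
  // jwtVerify checks the signature and standard claims such as exp/nbf;
  // pinning issuer and audience ties the token to this internal service.
  const { payload } = await jwtVerify(token, JWKS, {
    issuer: ISSUER,
    audience: "internal-billing-service",
  });
  return payload; // payload.sub identifies the calling machine client
}
```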
By implementing SSO as part of a comprehensive security strategy, businesses can better safeguard their sensitive data and protect themselves from the growing threat of cyber attacks.
## Good security can't be automated: the human element in SaaS security
While automation has revolutionized many aspects of modern business, it also presents new challenges when it comes to cybersecurity. As businesses become increasingly reliant on automated processes, they can become complacent and overlook potential security risks in their SaaS applications.
### The dangers of over-reliance on automation
Automation has undoubtedly made it easier to update and manage software, but it has also created new opportunities for cybercriminals to exploit. With so many processes now automated, it's easy for businesses to overlook the security implications of their actions and inadvertently expose themselves to cyber threats.
For example, the widespread use of automatic updates can make it easier for hackers to insert malicious code into software packages, as seen in the SolarWinds attack. To mitigate these risks, businesses must strike a balance between automation and human oversight, ensuring that their security measures are not entirely reliant on automated processes.
### The role of human expertise in cybersecurity
While automation can streamline many aspects of security, there's no substitute for the expertise and vigilance of human security professionals. By combining automated processes with human oversight, businesses can more effectively identify and address potential security risks in their SaaS applications.
Some key areas where human expertise can make a difference in cybersecurity include:
- Monitoring and analyzing security events: By regularly reviewing logs and other security data, human security professionals can identify potential threats and take appropriate action.
- Managing privileged accounts: Ensuring that access to sensitive systems is strictly controlled and regularly audited can help prevent unauthorized access and minimize the risk of insider threats.
- Implementing robust authentication and access controls: By combining SSO with other security measures, such as multi-factor authentication (MFA) and role-based access control (RBAC), businesses can create a more secure environment for their users.
By embracing a multi-layered approach that combines automation with human expertise, businesses can create a more secure environment for their SaaS applications, minimizing the risk of data breaches and other cyber threats.
## Building SaaS apps with scale in mind: preparing for growth and expansion
As businesses grow and evolve, so too do their security needs. Building SaaS applications with [scalability in mind](https://goauthentik.io/blog/2023-06-13-building-apps-with-scale-in-mind) ensures that they can accommodate this growth and continue to provide robust security as the organization expands.
### Designing for scalability from the ground up
When developing a SaaS application, it's essential to consider how the app will adapt to changing security requirements as the business grows. This involves designing the app's architecture and infrastructure with scalability in mind, ensuring that it can easily accommodate new users, features, and integrations.
Some key considerations for building scalable SaaS applications include:
- Modular design: By breaking the app down into smaller, reusable components, it's easier to update and expand the app as needed.
- Flexible infrastructure: Ensuring that the app's infrastructure can scale to accommodate increased demand helps prevent performance bottlenecks and other issues.
- Integration with existing systems: Designing the app to work seamlessly with other tools and platforms can help streamline security management and reduce the risk of compatibility issues.
By taking a proactive approach to scalability, businesses can create SaaS applications that are better equipped to handle the security challenges of a growing organization.
### Authentik Security: a scalable solution for SaaS security
Authentik Security's flagship product, [authentik](https://goauthentik.io/), is a unified authentication platform designed to help businesses protect their applications and safeguard sensitive data. With features such as SSO, IAM, and advanced authentication capabilities, Authentik Security provides a comprehensive security solution that can scale with your organization's needs.
Some of the key benefits of authentik include:
- Seamless integration with popular identity providers: authentik supports a wide range of identity providers, making it easy to integrate with your existing systems.
- Customizable session duration: authentik allows you to configure session durations to suit your organization's security requirements.
- Machine-to-machine authentication: authentik uses JWT tokens for secure communication between components, providing an additional layer of security (a short sketch of this pattern follows below).
- Robust access controls: With features such as RBAC, authentik enables businesses to implement strict access controls and minimize the risk of unauthorized access.
By choosing a flexible and scalable security solution like authentik, businesses can ensure that their SaaS applications are well-equipped to meet the security challenges of today and tomorrow.
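To make the machine-to-machine bullet above a little more concrete, here is a hedged sketch of a backend service obtaining a short-lived token with an OAuth2 client_credentials grant. The token endpoint URL, scopes, and environment variable names are assumptions; the exact values depend on how the identity provider is configured.

```typescript
// A minimal sketch of the OAuth2 client_credentials grant used for
// machine-to-machine authentication. The endpoint and credentials are
// placeholders; consult your identity provider's documentation for the
// exact token endpoint and required scopes.
export async function fetchServiceToken(): Promise<string> {
  const response = await fetch("https://sso.example.com/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: process.env.SERVICE_CLIENT_ID ?? "",
      client_secret: process.env.SERVICE_CLIENT_SECRET ?? "",
      scope: "openid profile",
    }),
  });
  if (!response.ok) {
    throw new Error(`Token request failed with status ${response.status}`);
  }
  const { access_token } = (await response.json()) as { access_token: string };
  return access_token; // typically a short-lived JWT presented to downstream APIs
}
```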
## Embracing a collaborative approach to cybersecurity
In the face of ever-evolving cyber threats, businesses need to adopt a collaborative approach to cybersecurity. By working together and sharing information about known or suspected attacks and vulnerabilities, businesses can better protect themselves and their customers from the risk of data breaches and other cyber attacks.
### Building strong relationships with software vendors
To enhance their security posture, businesses should foster close relationships with their software vendors. This includes requesting Software Bills of Materials (SBOMs) to better understand the components used in their software and practicing "dependency vendoring" to ensure that all third-party tools and libraries are thoroughly reviewed by their security team.
In addition, businesses should stay informed about security updates and release notes from their vendors, and ensure that any security-related patches are promptly applied.
### Encouraging collaboration and information sharing
By embracing a culture of collaboration and information sharing, businesses can better protect themselves and their customers from cyber threats. This involves actively participating in industry forums and communities, sharing information about known or suspected attacks and vulnerabilities and seeking advice from fellow security professionals.
At the same time, businesses must also be willing to listen to the collective wisdom of the cybersecurity community and take on board the advice and insights provided by their peers.
### Investing in human expertise and in-house tooling
While automation can play a valuable role in security, it's essential to recognize the importance of human expertise and in-house tooling in protecting your organization's SaaS applications. By investing in skilled security professionals and developing custom security tools tailored to your business's unique needs, you can better safeguard your sensitive data and protect your organization from cyber threats.
## The future of SaaS security
As the adoption of SaaS applications continues to rise, businesses must prioritize their security and take proactive measures to protect their sensitive data. By implementing robust enterprise security features, such as SSO, and embracing a collaborative approach to cybersecurity, businesses can better safeguard their SaaS applications and prepare for future security challenges.
Authentik Security provides a comprehensive, scalable solution for SaaS security, offering businesses a powerful tool to help secure their organizations and protect their sensitive data. By choosing a cutting-edge solution like authentik, businesses can stay ahead of the curve and ensure their SaaS applications' ongoing success and security.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@ -1,79 +0,0 @@
---
title: "We did an authentik hackathon!"
slug: 2023-08-02-we-had-an-authentik-hackathon
authors:
- name: Tana Berry
title: Sr. Technical Content Editor at Authentik Security Inc
url: https://github.com/tanberry
image_url: https://github.com/tanberry.png
tags:
- blog
- hackathon
- docathon
- team work
- code
- authentik
- git
- identity management
- authentication
hide_table_of_contents: false
image: ./image1.jpg
---
<aside>
🎉 Congratulations to all participants and to the Top Three prize winners!
</aside>
The first ever authentik hackathon just wrapped on Sunday, and we had a great time!
A huge thanks to our persistent hackers, who hacked from Wednesday through Sunday, and made some fantastic contributions to [authentik](https://goauthentik.io/). We are already looking forward to the next one (winter 2023, maybe?), and to another round of intense fun with our community members.
![](./image1.jpg)
<!--truncate-->
### Our intrepid participants
We had folks spread across four continents and six time zones! From the US West coast (Pacific time zone) to Texas to Western Africa, Central Europe, Eastern Europe and Mumbai, India, we had a time difference spread of 12 and a half hours. So yes, time zones are, errrrmmm… interesting. But it was actually quite manageable, and thanks to those on the early and late extremes of the hours! While not all of the people who registered actually showed up and hacked with us, those who participated were energized and very ready to contribute to and learn about authentik.
Over the five days, we had regularly scheduled Check-in calls to stay in touch with each other using voice chat on our #hackathon channel in Discord, and throughout the day (and night!) we could check in with each other on team-specific channels or on the main #hackathon channel.
### What we hacked on
Take a look at our [GitHub repo](https://github.com/goauthentik/authentik) for specific Issues and PRs, but the contributions ranged widely, from email configurations to [define allow lists](https://github.com/goauthentik/authentik/pull/6426) (thanks [@sandeep2244](https://github.com/sandeep2244)!), to adding [Kerberos as an authentication protocol](https://github.com/goauthentik/authentik/pull/6391) (kudos to [@mareo](https://github.com/Mareo) and [@rissson](https://github.com/rissson) for this amazing and challenging contribution!), to a wonderfully surprising amount of documentation additions on [managing users](https://github.com/goauthentik/authentik/pull/6420) (way to win, [@Baloc](https://github.com/Baloc)!) and improving the [installation and Beta upgrade docs](https://github.com/goauthentik/authentik/pull/6429) (thank you so much [@richardokonicha](https://github.com/richardokonicha)!!)!
We also had PRs to improve the authentik UI (specifically the [Library page](https://github.com/goauthentik/authentik/pull/6409) of applications), and a detailed, in-depth answer to a question about how to prompt users, via authentik's customizable Flows and Policies and also by using incentivization, to [configure a multi-factor authentication tool](https://github.com/goauthentik/authentik/issues/4571) (thank you [@smileyj](https://github.com/smileyj)!).
### And the top three winners are …
Congrats to the top prize winners:
![Beta icons created by Muhammad Ali - Flaticon](./beta.png) Our contributor @richardokonicha from Nigeria took third place for his hard work on our Beta upgrade docs... which led to improvements on the Docker Compose and the Kubernetes installation documentation. Richard will be forever famous for his insistence on testing the documentation (Every. Step. Of. It.) and for putting on the user's hat in order to produce clear, simple instructions.
![kerberos dogs](./dog-ring.png) The Wow Factor Contribution of adding Kerberos as a new authentication protocol came in at second place, and was worked on at a furious pace by two fellow Frenchmen, [@mareo](https://github.com/mareo) and [@rissson](https://github.com/rissson). Dramatic demos and happy tired faces are _de rigueur_ at hackathons, and this substantial contribution did not disappoint!
![docs icon](./icon_docs.png) At top place is [@Baloc](https://github.com/baloc), who also hails from France, and added substantial value by contributing procedural docs about user management. The tech docs badly need more How To content, and [@Baloc](https://github.com/baloc) bravely dove right in. We now have shiny new docs for all of the CRUD operations plus a load of reference content about session details, events, tokens, group memberships, MFA authenticators… there's a lot of powerful functionality in authentik's user management, and now we have docs to prove it!
### Oh the [good] drama!
Working closely with others on a project, sharing screens back and forth, with a relatively tight timeline can add a level of excitement and energy. Maybe we were lucky and just happened to have the World's Nicest People as our participants, or maybe software folk are just like that, but the energy was always positive and there were always helping hands (and eyes) available if things got sticky.
### Highlights
Some of the best takeaways from the event include:
- Watching others work is fascinating…. we all have our own ways of moving around our IDEs, making quick edits to config files (vi anyone?) and navigating Git repos, but wow the variety of exactly how different people do the same task is amazing. Learning from others is going to happen for sure at a hackathon, and surprisingly you can even learn a bit about yourself and your own work patterns!
- Embrace the rabbit holes! Sure, it can be exhausting for us introverts to interact with people and learn a TON from watching their work styles, and even more exhausting when you realize that your hacking partner is correct, we really _DO_ need to install a K8S cluster so that you can do some testing… but it is also immensely rewarding. A hackathon is the perfect time to give yourself permission to try something new, to spend a long while banging at something, guilt-free, and to put your real-world work responsibilities aside.
- The strength of community, and the ever-fresh wonder of chatting with people from all over the globe, is invigorating. Everyone in software is a builder, and building with others, in cities or locales that you might never have even heard of, is simply amazing, refreshing, and fun.
- authentik does a lot more than even our (relatively new) authentik team knew! (Well, our founder and CTO Jens knew, since he built it…) It was great fun to explore some of the deeper capabilities and functionality of authentik, and to have the original builder there to learn from.
- We had fun and moved fast, but also pushed our discipline to follow the regular authentik build rules (like `make website`), naming standards for our PRs, coding guidelines, etc. A little standardization and rule-following didn't dampen any of the fun, and made things easier when it was time to create PRs and merge our contributions.
### Join us for the next one!
We aren't yet sure of the exact schedule, but authentik will definitely have another Hackathon! We will have more great prizes (we know money isn't everything but a little competitive compensation for your time and effort is nice!), and celebrate the camaraderie and contributions.
In the meantime, drop into our repo anytime, look around, and see if there is anything you want to hack on and make a contribution!
See you there, and thank you all for being part of the authentik community.

Binary file not shown.


View File

@ -1,109 +0,0 @@
---
title: "The tightrope walk of authentication: a balance of convenience and security"
slug: 2023-08-09-the-tightrope-walk-of-authentication
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- financial cost
- SSO
- ethical dilemma
- identity provider
- users
- security
- cyberattack
- authentication
hide_table_of_contents: false
image: ./image1.png
---
In scenarios where security is offered as optional, there's an inherent risk. Customers, particularly those with a limited knowledge of digital security, might not fully comprehend its significance or choose to sidestep these features due to budget constraints. However, these seemingly inconsequential choices can expose users to significant risks. Without proper security measures in place, customers can become vulnerable to security breaches, putting their sensitive data at risk.
This situation raises a pressing question: how do we strike a balance in this landscape that is fair to both users and providers? Ensuring user convenience while maintaining robust security measures is complicated. If we lean too heavily towards convenience, we risk compromising on security. Conversely, an overemphasis on stringent security measures may lead to a complex and off-putting user experience.
![](./image1.png)
<!--truncate-->
## The pitfalls of balancing user convenience and robust security
Finding the sweet spot between these two extremes isn't easy, but it is essential. We need to explore innovative ways to embed authentication measures seamlessly into our products, mindful of the ethical implications of our choices and fostering a culture of security-awareness among users. Weaving the intricate tapestry of authentication requires not only creating robust security measures but also ensuring they are understood, accessible, and effective for all users.
The challenge lies in navigating the intertwined threads of convenience and security in a way that doesn't unravel the overall user experience or compromise the safety of user data. As we continue our journey on the digital tightrope, the quest for answers on crafting a solution that respects the ethical nuances of this process continues.
### Flexibility: tailor-made security measures
In the heart of authentik's approach lies the ability for users to custom-fit their security measures. This is not a one-size-fits-all solution, but rather an opportunity for assessment and customization, in which users can prioritize security aspects that are crucial to their unique circumstances.
By offering such flexible configurations, authentik ensures that the control lies in the users' hands. This personalized approach not only enhances user experience but also contributes to an optimal level of security based on individual needs and understanding.
### Versatility: diverse protocols and services
With the digital world evolving at a lightning pace, the needs and requirements of users are also constantly changing. Authentik supports a multitude of protocols and services, providing a level of versatility that caters to an extensive user base. From businesses with complex corporate structures to research, data and university centers, to individual users seeking straightforward solutions, authentik is equipped to handle almost all environments and protocols.
### Transparency: the open-source advantage
As an open-source identity provider, authentik seeks to unify the divided fronts of convenience and security. Open-source software holds a special place in the tech world due to its emphasis on transparency, collaboration, and community involvement. Being open-source means that authentik's internal workings are visible for everyone to see, review, and contribute to. This transparency builds trust among users and the wider community, assuring them that there are no hidden caveats in their commitment to security.
Moreover, the open-source model promotes continuous improvement and innovation. With the collective intelligence of a global community, security measures can be regularly updated, bugs can be identified and fixed promptly, and new features can be added more quickly. This not only enhances the quality of the software but also ensures that it evolves in line with the latest security threats and user needs.
All in all, authentik's open-source approach embodies a commitment to user empowerment, transparency, and continuous improvement. By tackling the challenges surrounding authentication head-on, it sets a standard for what a user-focused, ethical identity provider can look like.
## Delving deeper: ethical dilemmas in authentication
In the complex landscape of integrating security measures like Single Sign-On (SSO), it's crucial to examine the ethical dilemmas entangled within this process. The crux of these issues is that when security is proposed as a separate add-on, customers might not grasp its significance, or could decide to forgo it due to financial constraints. This seemingly innocuous decision could expose them to data breaches, bringing about severe repercussions. Let's delve into these ethical puzzles in greater detail.
### Understanding the importance: an ethical imperative
Firstly, we ask if selling security as a separate entity is morally correct when customers may not fully appreciate its importance. The gravity of this dilemma arises from the fundamental role that comprehension plays in making informed decisions.
The labyrinth of cybersecurity can often be daunting for users, with a multitude of complex terms and concepts to grasp. Selling security separately without providing adequate knowledge can lead to customers underestimating the importance of security measures, leading to decisions that potentially jeopardize their digital safety.
### The cost-vs-security seesaw
Next, we grapple with the ethical dilemma surrounding the balance between cost and security. If security measures are priced high, it might deter customers from taking these necessary precautions. In this situation, are companies indirectly encouraging customers to take risks with their data security?
The cost-security seesaw presents a tricky challenge. On the one hand, quality security measures require resources to develop and maintain, justifying the associated costs. On the other hand, high prices could push users to forgo these measures, leaving their data unprotected and vulnerable.
### Navigating users' lack of knowledge: an ethical responsibility
Lastly, an ethical question surfaces around ignorance towards digital security. If customers choose to forgo security due to a lack of knowledge, the question of blame becomes pertinent.
Is it the responsibility of companies to ensure that their customers are adequately educated about the importance of security? Or should the onus be on the users to equip themselves with the knowledge necessary to protect their digital assets? This dilemma draws attention to the shared responsibility of both parties in maintaining digital security and the ethical implications if one party neglects their duty.
In conclusion, these three ethical dilemmas underline the importance of user education, transparent pricing, and shared responsibility in offering security measures like SSO. By understanding these challenges, companies can better strategize their approach towards selling security measures, ultimately ensuring a safer digital landscape for their users.
## The authentik approach: a case study in ethical authentication
In a world where we often encounter dilemmas about whether security measures like SSO should be integral components of a product or sold separately, authentik aims for a balanced approach. Let's look at how SSO providers in general, and authentik specifically, can handle these questions.
### Flexibility and versatility
Recognizing that every user has unique needs and levels of understanding when it comes to digital security, an SSO should allow users to select and implement the security measures that are most relevant to their circumstances.
Our emphasis on versatility ensures that authentik's platform can adapt to a wide array of situations and user requirements. Rather than pushing a one-size-fits-all security solution, authentik acknowledges that what works for one user may not work for another. This philosophy goes a long way in addressing one of the key ethical dilemmas in authentication: the balance between convenience and security.
### Empowering users through transparency
Another distinguishing aspect of authentik's approach is its commitment to transparency. As an open-source identity provider, authentik allows its users to peek under the hood and gain a clear understanding of how their security measures are implemented. This degree of transparency fosters trust and helps users make informed decisions about their digital security.
This transparency is not only ethical but also empowering. It offers users the information they need to decide about their security, thereby addressing another dilemma: ensuring users understand the importance of security measures.
### Prioritizing user understanding and needs
By giving users the flexibility to choose their own security measures and providing them with a clear understanding of how these measures work, authentik empowers users to take control of their digital security. This user-centric approach is a practical solution to the ethical dilemma of user understanding and need.
## FAQs
- What is the primary ethical dilemma in offering security measures like SSO separately?
- The main ethical issue is that customers may not fully understand the importance of security measures, or they might choose to ignore them due to cost considerations. This could expose them to security breaches.
- How can SSO providers address these ethical dilemmas in authentication?
- authentik allows users to tailor-make their security measures, offers a wide range of protocols and services, and maintains transparency through its open-source nature. This approach empowers users to prioritize their security based on their needs and understanding.
- How can companies strike a balance between security and convenience in authentication?
- Companies can strike a balance by educating customers on the importance of security measures, providing flexible and user-friendly security options, and maintaining transparency in their security practices.
By prioritizing flexibility, versatility, and transparency, SSO providers can build solutions in which users can choose, understand, and control their own security measures. This approach mitigates the risk of misunderstanding or underestimating the importance of such measures, ensuring users aren't left vulnerable to security breaches.
The world of authentication, with all its challenges and dilemmas, isn't as daunting when equipped with the right approach and tools. By focusing on user empowerment, we can ensure that the ethical dilemmas inherent to authentication are addressed and users are both protected and empowered.

Binary file not shown.


View File

@ -1,150 +0,0 @@
---
title: "Let's make identity fun again (whether we build it or buy it)"
slug: 2023-08-16-lets-make-identity-fun-again
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- build-vs-buy
- SSO
- third-party software
- identity provider
- vendors
- security
- authentication
hide_table_of_contents: false
image: ./image1.jpg
---
Identity, whether we're talking about internal authentication (think Auth0) or external authentication (think Okta), has become boring.
Little else proves this better than the fact that Okta and Auth0 are now the same company and that their primary competitor, Microsoft AD, survives based on [bundling and momentum](https://goauthentik.io/blog/2023-07-07-Microsoft-has-a-monopoly-on-identity). Identity has become a commodity: a component you buy off the shelf, integrate, and ignore.
Of course, taking valuable things for granted isnt always bad. We might regularly drive on roads we dont think much about, for example, but that doesnt make them any less valuable.
The danger with letting identity become boring is that we're not engaging in the problem and we're letting defaults drive the conversation rather than context-specific needs. We're not engaging in the solution because we're not encouraging a true buy vs. build discussion.
> My pitch: Let's make identity fun again. And in doing so, let's think through a better way to decide whether to build or buy software.
[![Image1](./image1.jpg)](https://pixabay.com/users/jplenio-7645255/ "Image by jplenio on pixabay")
<!--truncate-->
## How identity became boring and the big players became defaults
There are one million articles about build vs. buy because it's one of those problems that won't go away. Ironically, despite the never-ending discussion, there tends to be a firm anchor: build the features that differentiate your product and buy everything else.
Jeff Lawson, co-founder and CEO of Twilio captured this well in his book _Ask Your Developer_, writing that “My rule of thumb is that for anything that gives you differentiation with customers, you should build. Software that faces your customers, you should build.”
Within this framework, identity almost inevitably appears to be the perfect example of buying instead of building. When has a login screen ever made one product stand out from another? When has a user ever said, “The product is good but the authentication process brought me joy”?
It's easy, obvious, and, from within this framework, correct to buy your identity feature. Identity isn't unique here, but the strength of the consensus around buying instead of building is striking.
Small startups, on one end of the spectrum, tend to strictly follow the rule of thumb above. Along the way to product/market fit, and often well after it, startups find it worthwhile to invest almost everything into the bleeding edge features that will wedge them into the market.
Identity is an early requirement they often want to sweep away. Identity becomes a commodity to buy, and the defaults (usually Okta and Auth0) feel obvious.
Enterprises, on the other end of the spectrum, tend to be swamped with bureaucracy and overwhelmed by internal and external demands. Enterprises tend to need extensive feature coverage, multitudes of integrations, and always-on customer support. From this perspective, defaults appear attractive and Microsoft AD becomes compelling.
> Across the spectrum, identity has developed the reputation of being a boring problem with a commodity solution.
If companies were aware of the tradeoffs that come from choosing the default path, we wouldn't be having this conversation. But for many companies, the default feels like a standard, and all the non-standard paths are obscured.
## Build vs. buy and its extremes
The “build your core; buy everything else” framework feels authoritative because it rests on logic we don't do a good job of questioning.
Is there actually always great software to buy? Is build vs. buy a black-and-white decision? Do we actually have a good understanding of differentiation?
No, no, and also no.
### Third-party software is a market for lemons
Lawson's rule of thumb implicitly relies on an idea in economics called the [efficient-market hypothesis](https://en.wikipedia.org/wiki/Efficient-market_hypothesis). The basic idea is this: assets and asset prices reflect all available information.
There are decades of economists arguing back and forth on the accuracy of this idea and whether all available information really is brought to the table. But in a cultural and business context, it's been honed into a simple assumption: the market incentivizes identifying and solving problems, so that, for the most part, the best possible solution is available at any given time.
But there's a competing theory that explains the software procurement process better: the [market for lemons](https://en.wikipedia.org/wiki/The_Market_for_Lemons) concept. The core argument is that in a market with information asymmetry between buyers and sellers, the quality of the products can degrade and buyers can end up with defective products (lemons).
Third-party software procurement is often surprisingly inefficient, and when you think about your actual experiences purchasing software or using purchased software, you'll likely remember a lot of lemons (even if few software vendors are actually like a used car salesperson).
Dan Luu has a great article on the topic called [Why is it so hard to buy things that work well?](https://danluu.com/nothing-works/) He writes that companies, in principle, should be able to outsource work outside their core competencies but that those who do, in his experience, “have been very unhappy with the results compared to what they can get by hiring dedicated engineers.”
This disappointment applies in absolute terms (meaning the product might not be as good as promised or that support isn't efficient at making it work for you) and in financial terms (meaning large contracts can often end up costing more than the salaries of the engineers you otherwise would have hired).
Examples abound, including a product that was supposed to sync data from Postgres to Snowflake that ultimately lost data, duplicated data, and corrupted data. There's also Cloudflare Access, [named by Wave's then-CTO Ben Kuhn](https://twitter.com/benskuhn/status/1382325921311563779?s=20), that came with a product-breaking login problem that the Cloudflare support team misinterpreted before escalating to an engineering team, who “declared it working as intended.”
The market doesn't need to exclusively comprise lemons to be a market of lemons; the information asymmetry just needs to be imbalanced enough, consistently enough, that the typical buy vs. build framework doesn't work.
The primary benefit of the API economy, in theory, was the rise of hyper-specialized services built by hyper-specialized engineers.
But there's a downside: If no one knows more about payment processing than Stripe, then how can other engineers adequately evaluate the options? And that doesn't just apply to sheer functionality. In-house engineers are likely going to struggle to evaluate the quality of the integrations and the amount of support necessary and available too.
Consensus provides little relief. As Dan writes, “Even after selecting the consensus best product in the space from the leading (as in largest and most respected) firm, and using the main offering the company has, the product often not only doesn't work but, by design, can't work.”
## Buy vs. build as a false dichotomy
The build vs. buy framework often fails because the “vs.” implies a black-and-white comparison between building software from scratch and buying vendor software that's effectively a black box.
Once you decide you only need a commodity feature, you start to treat the feature as a solved problem that's solved by an efficient market. And once you assume that, the consensus default becomes the obvious choice.
There are two misconceptions buried in the false dichotomy:
- To _build instead of buy_ is to build from scratch.
- To _buy instead of build_ is to get a complete solution in one package.
In the first misconception, we tend to treat the process of building software as building from the ground up. Building tends to get associated with wastefulness or over-indulgence. This is a shallow way to think about this option, however, considering how many ways you can adopt open source components, buy component parts you can build with and adapt, or build on extensible tools and platforms.
And when you purchase component parts from smaller vendors instead of buying “complete” packages from large vendors, you often get to work more closely with the vendor and shape the product in a way that works for you (and for other customers like you). A vendor option can then be customizable out-of-the-box and customizable long-term as you work alongside the vendor.
> Customization of the login and authentication workflow, using our editable flows, stages, and UI elements, is a core out-of-the-box feature of [authentik](https://goauthentik.io/).
In the second misconception, we tend to assume that the offered solution is complete and that buying a product merely involves breaking out the company credit card. That might be what the vendors pitch but more often than not, there are significant costs to maintenance and integration. Duncan Greenberg, Vice President at Oscar Health, has [argued](https://medium.com/oscar-tech/the-many-pitfalls-of-build-vs-buy-5364f49a4fed) that “The choice is also often better seen as buy and maintain or buy and integrate” because, he writes, “Some amount of building is always required.”
And while some of these costs can be written off as short-term, others linger. Will Larson, CTO at Carta, [writes that risks include](https://lethain.com/build-vs-buy/) “the vendor going out of business, shifting their pricing in a way that's incompatible with your usage, suffering a severe security breach that makes you decide to stop working with them, or canceling the business line.”
Even open-source and open-core components can pose this danger. Consider HashiCorp's recent [controversial licensing change](https://twitter.com/HashiCorp/status/1689733106813562880?s=20).
### Differentiation is not always obvious
Finally, one of the most misleading aspects of the typical buy vs. build framework is how it leans on an idea that's hard to define: differentiation.
In the original framework, startups are supposed to build only the features that differentiate their products from other products or that otherwise make them stand out and feel valuable to their target customers.
There's a compelling logic to this because it often makes sense to devote most of your resources to a single opportunity instead of spreading yourself thin. When you start to achieve product/market fit, the market “[pulls product out of the startup](<https://a16z.com/2017/02/18/12-things-about-product-market-fit-2/#:~:text=The%20question%20then%20is%3A%20who,organically%20(i.e.%2C%20without%20any%20advertising)>).” And when that happens, it makes sense to work in that direction rather than distracting yourself with other tasks.
The trouble is that the directive to build customer-facing features isn't always a good framework. Netflix, for example, built the whole idea of chaos engineering because they couldn't buy the kind of resilience they needed. Customers would benefit but most wouldn't even notice; still, they built.
This is another way the efficient market hypothesis can lead us astray.
Sometimes, even industry-leading vendors aren't a good fit. They might be missing features you need; they might charge exorbitant prices for your usage levels or for essential features like SSO; and they might not be iterating fast enough to keep up with changing demands.
The more carefully you think not only about _what_ differentiates you but _how_ you can make [X feature] into something that differentiates you, the more you'll find reasons to build.
## Security is 90% execution and 10% innovation
One of the best reasons to buy software is because a vendor is naturally incentivized to iterate, innovate, and keep up with a changing market (a market that likely isn't yours but may feed into yours).
It would be obviously foolish, for example, to try building your own LLM instead of working with OpenAI. They're already far ahead and they've built a machine for staying ahead and going faster.
This dynamic, however, isn't true across many markets. Unlike AI, where most of the market feels like whitespace, modern security concerns are fairly well mapped out. There are many issues, of course, and many gaps in the market remain, but there aren't many paradigm shifts on the horizon nor problem areas that still require pioneers.
> We're not doing brain surgery, in other words; we're matching prescriptions to known diagnoses.
In security, where the typical build vs. buy framework perhaps works the least well, security teams can turn into pilots and drivers of tools instead of engineers. Adrian Sanabria, Director of Product Marketing at Valence Security, [explains that many security teams](https://medium.com/@sawaba/when-to-purchase-a-solution-to-your-cybersecurity-problem-86de1fa203ba) have “mistaken a bill of goods for a security program.” In the process, he writes, there become “entire security teams that are little more than babysitters for a particular product the company owns.”
And this is where the opportunity to build (or customize) instead of buy exists. In a mature industry, where standards are stable and most problems have at least broad solutions, it often makes more sense to build a feature in-house so that you can execute it as well as possible.
There comes a point where the creation and implementation remaining to be done is best done by the people closest to the precise problem in its exact context. In the security industry, success depends more on a granular understanding of the problem than sheer innovation.
## Let's make identity fun again
We can make identity, as well as many similar problems, fun again. We can resist defaulting to industry leaders and insist on building custom solutions or building on top of products that will grow with us.
The goal isn't to flip our defaults and start building everything from scratch. The goal is to reexamine our building and buying criteria and recontextualize them in our industries, use cases, and particular needs. The more we do so, the more we'll find that building is a better path than we might have guessed.
And even if building isn't the right option, reconsidering our choices and our decision criteria will help us figure out how to search for better vendors and how to fully evaluate them.
For years, we've tried to avoid building as much as possible, and it's an extreme that is worth resisting or at least questioning. But as someone who's building around identity every day, I can assure you it's more fun than you'd guess and more rewarding both for you and your users.

Binary file not shown.


View File

@ -1,67 +0,0 @@
---
title: "My hobby became my job, 50% extra pay, just needed to let go of GPLv3"
slug: 2023-08-23-my-hobby-became-my-job
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- founder
- SSO
- open source
- identity provider
- licensing
- gpl
- mit
- security
- authentication
hide_table_of_contents: false
image: ./image1.jpg
---
There's been a lot of discussion about licensing in the news, with [Red Hat](https://www.redhat.com/en/blog/furthering-evolution-centos-stream) and now [Hashicorp](https://www.hashicorp.com/blog/hashicorp-adopts-business-source-license) notably adjusting their licensing models to be more “business friendly,” and [Codecov](https://blog.sentry.io/lets-talk-about-open-source/) (proudly, and mistakenly) [pronouncing](https://about.codecov.io/blog/codecov-is-now-open-source/) they are now “open source.”
“Like the rest of them, they have redefined Open as in Open for business”—[jquast on Hacker News](https://news.ycombinator.com/item?id=37021360)
This is a common tension when you're building commercially on top of open source, so I wanted to share some reflections from my own experience of going from MIT, to GPL, back to MIT.
![Photo by <a href="https://unsplash.com/@gcalebjones?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Caleb Jones</a> on <a href="https://unsplash.com/photos/J3JMyXWQHXU?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>](./image1.jpg)
<!--truncate-->
I started working on the project that led to [authentik](https://github.com/goauthentik/authentik) when I was 20. My original vision was a single pane of glass for emails, domains, applications, hosting, and so on. This was overly ambitious for one person and their hobby project, and I ended up spending most of my time on the SSO part. This became its own project: Passbook (later [renamed to authentik](https://github.com/goauthentik/authentik/pull/361) due to a [naming conflict](https://techcrunch.com/2015/06/08/apple-rebrands-passbook-to-wallet/)).
Initially, authentik used the MIT license. When [Elastic called out AWS](https://www.elastic.co/blog/why-license-change-aws) for trademark abuse (offering Elasticsearch as an AWS service without collaborating with Elastic), I [changed it to GPLv3](https://github.com/goauthentik/authentik/commit/4671d4afb4d32988ca0058a33888862bd9652b16) because I didn't like what AWS did in principle, and didn't want it to happen to authentik.
# An opportunity, and a compromise
Two years later, [Sid](https://www.linkedin.com/in/sijbrandij/) at [Open Core Ventures](https://opencoreventures.com/) (OCV) contacted me about [creating a company](../2022-11-02-the-next-step-for-authentik/item.md), building on the features and functionality of authentik. It was a dream opportunity: work full time on my hobby project and make 25% more in the process. But I had to let go of the GPL license.
With an open core model customers are usually using code from both the open source and proprietary codebases. This necessitates a dual license structure, meaning customers need to accept both licenses.
The drawback of building commercially on top of open source software using GPL is that the copyleft aspect can put some people off. Not every person or business wants to have to expose their code for every minor change or bug fix they may add, and they will sooner find a competitor with a more permissive license than adopt your software. This is obviously not ideal when you're trying to get traction and grow a business.
OCV proposed we switch back to MIT.
# Considerations and tradeoffs
I was very conflicted about reverting to MIT because we had chosen GPL for a reason, but the circumstances had changed. As a company and a real legal entity, we would have recourse if something like AWS/Elasticsearch were to happen—it wouldn't just be me trying to defend myself while also doing my day job. The decision forced me to reflect on what it means to build a company on top of an existing open source project.
For me, it was an opportunity to work full time on a passion project, with more resources to invest in building and maintaining the open core of the project. The opportunity came with tradeoffs to be made, and a responsibility to be a good steward of the open source project.
I know how volatile startups can be. I had put so much time into authentik already, and my biggest concern was around what happens if things don't work out. I wanted to make sure that the open source version stays free, vibrant, and open for use by all.
## A license isn't the only way to guarantee good behavior
With a permissive license, the risk of [bait and switch](https://opencoreventures.com/blog/2022-10-preventing-the-bait-and-switch-open-core/) is always there. A commercial company needs to become profitable and there is precedent for changing to more limited licenses when it suits the business. People naturally see this as a dichotomy: you either have a copyleft license and therefore your intentions are enshrined in the license, or a permissive one and can't be trusted to uphold open source ideals.
There is a third path though, which is the route we eventually took with [Authentik Security](https://goauthentik.io/), the company we were building on top of the project. We incorporated as a public benefit company, which means that we are legally bound by the terms in the [OCV Public Benefit Company Charter](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md). This includes commitments to keeping open source products open source, and ensuring the majority of new features added in a calendar year are made available under an open source license. Being a public benefit company means we are still held accountable, just through a different mechanism than the license.
# The process of changing the license
Changing licenses is a sensitive issue. I consulted with the top contributors to authentik to hear their feedback while we were in the process of setting up Authentik Security. Nobody objected, so we [switched back to MIT](https://github.com/goauthentik/authentik/commit/47132faffbac1098dadba73435164e655901e9e7) and announced the change in the [company announcement post](https://goauthentik.io/blog/2022-11-02-the-next-step-for-authentik). I think I was surprised there wasn't a backlash or accusations of putting profit over principle (we have all seen [how](https://news.ycombinator.com/item?id=37081306) [impassioned](https://news.ycombinator.com/item?id=36971490) [people](https://news.ycombinator.com/item?id=37003489) [get](https://news.ycombinator.com/item?id=36990036) about open source and ideals). I like to think that people saw the pragmatism in the decision: that MIT lets us further the work of authentik.
# Reflections
While a copyleft license is one way to hold companies accountable to upholding the principles of open source, with Authentik Security we struck a balance between commercial viability (via the more permissive MIT license) and the values I wanted to entrench by becoming a Public Benefit Company. I now get to work full time on my hobby, and the core of authentik is still open source.

Binary file not shown.


View File

@ -1,56 +0,0 @@
---
title: Announcing the authentik Enterprise release!
slug: 2023-08-31-announcing-the-authentik-enterprise-release
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- founder
- SSO
- open source
- community
- identity provider
- enterprise
- support
- help-center
- security
- authentication
hide_table_of_contents: false
image: ./image1.png
---
📣 We are happy to announce that the first authentik Enterprise release is here! 🎉
The Enterprise release of authentik provides all of the functionality that we have spent years building in our open source product, plus dedicated support and account management.
This Enterprise version is available in Preview mode in our latest release, 2023.8.
This is an exciting step for us, as we grow the team and the company and our user base. We officially became a company just last fall (I wrote about it in November 2022, in “[The next step for authentik](../2022-11-02-the-next-step-for-authentik/item.md)”), and this release is another move forward in maturing authentik into the SSO and identity management app of choice.
One thing we want to acknowledge, up front, is that we would never have been able to achieve this goal without the years of support from our open source community. You all helped build authentik into what it is today, and that's why all of our Enterprise-level features will be open core and source available, always.
![](./image1.png)
<!--truncate-->
To upgrade and get going with the Enterprise version, refer to our documentation for instructions for your deployment:
- [Docker Compose installation](../docs/installation/docker-compose)
- [Kubernetes installation](../docs/installation/kubernetes)
Keeping it simple, we made sure that installing and upgrading authentik is exactly the same process for both the Enterprise version and our free open source version.
With this first Enterprise release, dedicated support is the feature; this version provides access to our Support center where you can open tickets, view tickets and their progress, and ask questions about your Enterprise product.
For our open source community, we will continue to engage in the robust conversations and problem-solving, as always, in our Discord server. These conversations and community collaboration are the heart and soul of authentik… we learn from everyone, and we will always be active and responsive there within our community.
Check out our Enterprise documentation for information about creating and managing your organization, purchasing and activating a license, accessing support, and managing billing and organization members.
- [Get started with Enterprise](../docs/enterprise/get-started)
- [Manage your Enterprise account](../docs/enterprise/manage-enterprise)
- [Support for Enterprise accounts](../docs/enterprise/entsupport)
In future releases, we will be adding additional Enterprise features, including RBAC support, inbuilt remote desktop access, and an authentik mobile app for multi-factor authentication.
For this preview release of authentik Enterprise, we'd like to hear from you. Thoughts, suggestions, questions, any specific direction that you'd like to see the Enterprise version focus on? Contact us at [hello@goauthentik.io](mailto:hello@goauthentik.io).

Binary file not shown.


View File

@ -1,83 +0,0 @@
---
title: "Sourcegraph security incident: the good, the bad, and the dangers of access tokens"
slug: 2023-08-11-sourcegraph-security-incident
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- Sourcegraph
- token
- transparency
- identity provider
- leaks
- breach
- cybersecurity
- security
- authentication
hide_table_of_contents: false
image: ./image1.jpg
---
Access tokens make identity management and authentication relatively painless for our end-users. But, like anything to do with access, tokens also can be fraught with risk and abuse.
The recent [announcement](https://about.sourcegraph.com/blog/security-update-august-2023) from Sourcegraph that their platform had been penetrated by a malicious hacker using a leaked access token is a classic example of this balance of tokens being great… until they are in the wrong hands.
This incident prompts all of us in the software industry to take yet another look at how our security around user identity and access can be best handled, to see if there are lessons to be learned and improvements to be made. These closer looks cover not only how our own software and users utilize (and protect) access tokens, but also how such incidents are caught, mitigated, and communicated.
![Photo by <a href="https://unsplash.com/@juvnsky?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Anton Maksimov 5642.su</a> on <a href="https://unsplash.com/photos/wrkNQmhmdvY?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>](./image1.jpg)
<!--truncate-->
## What happened at Sourcegraph
The behavior of the malicious hacker after they accessed the platform reveals a fairly typical pattern: access the system, gain additional rights by creating new user accounts, switch accounts to fully probe the system, and finally, invite other malicious actors in through the breach. Unfortunately, it is usually that last step, not the first, that sets off alarm bells.
Let's take a look at what occurred at Sourcegraph.
On July 14, 2023, an engineer at Sourcegraph created a PR and committed a code change to GitHub that contained an active site-admin access token. This level of access token had privileges to not only view but also edit user account information.
For the next two weeks, the leak seems to have remained undetected, but on Aug 28 a new account was created, apparently by the hacker-to-be, and on Aug 30th the hacker used the leaked token to grant their account admin-level privileges, thereby gaining access to the Admin dashboard.
On the dashboard, the hacker was able to see the first 20 accounts displayed, along with the license keys for each account. Sourcegraph did [state](https://www.securityweek.com/sourcegraph-discloses-data-breach-following-access-token-leak/) that possession of the license key did not allow for access to each accounts Sourcegraph instance, fortunately.
However, the intruder didn't stop with seeing the license keys; they went on to create a proxy app that allowed any users of the app to access Sourcegraph's APIs for free. Instructions on how to use the app were widely circulated on the internet, with almost 2 million views.
> “_Users were instructed to create free Sourcegraph.com accounts, generate access tokens, and then request the malicious user to greatly increase their rate limit._” ([source](https://about.sourcegraph.com/blog/security-update-august-2023))
The subsequent spike in API usage is what alerted the Sourcegraph security team to a problem, the very same day, August 30, 2023. The team identified the hackers site-admin account, closed the account and then began an investigation and mitigation process.
One significant detail is how the malicious hacker obtained the access token in the first place: from a commit made to the Sourcegraph repository on GitHub. It's unlikely we will ever know how the token was included in the commit. What we do know is that shortly after the breach was announced, a [PR](https://github.com/sourcegraph/sourcegraph/pull/56363) was opened to remove instructions about hardcoding access tokens from the Sourcegraph documentation.
Most companies have serious checks in their automated build processes, and it sounds like Sourcegraph did have some checks in place, but they didn't catch the exposure of this access token in the commit. Coming back to the point that these types of incidents cause us all to look again, more closely, at our practices: here at Authentik Security we do indeed have a very robust set of checks in place as part of our required CI/CD pipeline, and we use [Semgrep](https://github.com/returntocorp/semgrep) to search for tokens and other artifacts that we do not want to expose. With Semgrep, you can write a custom rule to look for an exact token schema, so that no matter what type of tokens you use, their presence in the code base can be discovered.
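Semgrep rules themselves are written in Semgrep's own YAML rule syntax; as a deliberately simplified stand-in for the same idea, here is a small pre-commit script that scans staged files for a token-like pattern. The `example_pat_` prefix is hypothetical and only stands in for whatever schema your real tokens follow.

```python
#!/usr/bin/env python3
"""Simplified stand-in for a "find our token schema" check (not Semgrep itself).

The token prefix below is hypothetical; a real setup would encode the exact
schema of your tokens in a Semgrep rule and run it as a required CI check.
"""
import re
import subprocess
import sys

# Hypothetical token format: a fixed prefix followed by 32+ random characters.
TOKEN_PATTERN = re.compile(r"\bexample_pat_[A-Za-z0-9]{32,}\b")


def staged_files() -> list[str]:
    """Return the paths of files staged for the current commit."""
    result = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True,
        text=True,
        check=True,
    )
    return [line for line in result.stdout.splitlines() if line]


def main() -> int:
    findings = []
    for path in staged_files():
        try:
            with open(path, encoding="utf-8", errors="ignore") as handle:
                if TOKEN_PATTERN.search(handle.read()):
                    findings.append(path)
        except OSError:
            continue  # deleted or unreadable file; skip it
    if findings:
        print("Possible access token found in:", ", ".join(findings))
        return 1  # non-zero exit blocks the commit when used as a pre-commit hook
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In practice you would wire a scanner like this (or, better, Semgrep itself) into the CI pipeline as a required check, so a commit containing a matching string never reaches the repository.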
## Best practice around tokens
Access tokens have for decades been an essential artifact used in application systems to efficiently and securely manage authentication. They are not going away anytime soon. The onus is on the software companies, and their security engineers, to optimize the protection of access tokens.
The best-known best practice around access tokens is to make sure that they have a very short shelf life; they should expire and become unusable within minutes, not hours or days. This is standard practice. In authentik, by default we set the expiration for access tokens at 5 minutes, and we use JWTs (JSON Web Tokens) for added security. We blogged about this recently; have a [read](https://goauthentik.io/blog/2023-03-30-JWT-a-token-that-changed-how-we-see-identity).
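As a rough illustration of the short-expiry practice (a minimal sketch, not authentik's implementation), the snippet below issues and validates a 5-minute JWT with the PyJWT library; a symmetric key is used purely for brevity, whereas an identity provider would sign with an asymmetric key.

```python
# Minimal sketch of a short-lived JWT access token, using PyJWT.
# Not authentik's implementation; the symmetric key is for brevity only.
from datetime import datetime, timedelta, timezone

import jwt

SIGNING_KEY = "replace-me"  # illustration only; never hardcode real keys


def issue_access_token(subject: str) -> str:
    now = datetime.now(timezone.utc)
    claims = {
        "sub": subject,
        "iat": now,
        "exp": now + timedelta(minutes=5),  # short shelf life, as discussed above
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")


def validate_access_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError once "exp" is in the past, so a leaked
    # token stops being usable within minutes.
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
```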
Of course, refresh tokens also need to be considered and protected, because they can be used to create new access tokens. Refresh tokens are typically never passed externally, and if the authorization server is separate from the application server, then the application server will never even see refresh tokens (only short-lived access tokens). Note that this would not have helped in the Sourcegraph incident, since the malicious hacker had admin-level access, and thus had access to the secure cookie with the refresh token.
## Security breaches are inevitable
Constant effort is required to stay ahead of malicious hackers, and we can't win every time. Beyond specific best practices for tokens, security teams can focus on building a company culture with a defense-in-depth strategy that uses encryption for tokens (and other sensitive values) in transit and at rest. Other basic, low-hanging fruit in a solid security plan includes purposeful secrets management, granting the “least privilege” needed, and implementing SCA (_software composition analysis_) tooling.
However, if a security breach does occur, it's very important (on many levels) how the hacked company responds to the incident. And the very first part of the response is the _acknowledgement_ that a breach occurred. This act alone, announcing what happened, when, how, who was impacted, and what the mitigation plans are, is absolutely crucial.
Sourcegraph did a great job here; they let us know the same day they knew, and they shared as many details as possible.
> Transparency about the discovery and all the gory details of the breach is vital; it rebuilds trust with users.
Could the breach have been prevented? Sure, of course, on several fronts. The leaked access token should have been found and removed from the code _before_ the commit was made, and thus never made available in the GitHub repository at all. Even if it got into the code base on the repo, a subsequent Semgrep analysis could have caught it, and the token could have been revoked and removed. As it was, weeks passed with the token sitting there, in public view, before a malicious hacker found and used it.
However, another thing that Sourcegraph got right was their internal architecture and security practices; the fact that they did not store all of the data in one place prevented the intruder from going very deep.
> Sourcegraph [stated](https://about.sourcegraph.com/blog/security-update-august-2023): “Customer private data and code resides in isolated environments and were therefore not impacted by this event.”
Sourcegraph was clear and open about exactly who was impacted, and exactly how they were impacted. For open source users it was email addresses. For paid customers, the malicious user could only view the first 20 license key items on the admin dashboard page, and the license keys did not provide access to the users' instances.
## Lessons learned, by all of us
In hindsight, it's easy to comment on how Sourcegraph handled this breach, what they did right and where they could have done better. But the truth is that with every security incident, every leaked token, and every malicious hack, we all learn new ways to strengthen our security. Hopefully we also continue to learn the importance of transparency, rapid acknowledgement, and full disclosure about the breaches that do, nonetheless, occur.

Binary file not shown.

Before

Width:  |  Height:  |  Size: 1.3 MiB

View File

@ -1,138 +0,0 @@
---
title: Black box security software cant keep up with open source
description: "There will always be bugs and vulnerabilities in software. Accepting that, which distribution model gives you more confidence and flexibility?"
slug: 2023-09-14-black-box-security-software-cant-keep-up-with-open-source
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- open core
- SSO
- open source
- community
- identity provider
- enterprise
- source available
- closed source
- security
- authentication
hide_table_of_contents: false
image: ./image1.jpg
---
> **_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._**
---
Legacy security vendors that rely on black box development can't keep up with open source. It's an oft-discussed topic—the ability of open source communities to quickly jump in and collectively solve problems and innovate solutions—but it is equally believed that "serious" security software companies have proprietary software.
In this blog, we will take a closer look at the pros and cons of the various source availability types of SSO and other security software.
!["mike-kononov-lFv0V3_2H6s-unsplash.jpg"](./image1.jpg)
<!--truncate-->
Since were going to use these terms a lot in our discussion, some definitions first:
| | |
| ---------------- | ------------------------------------------------------------------------------------------------------------------------------------- |
| Open source | Code that is free to inspect, use, modify, and distribute |
| Closed source | Code that is proprietary and not publicly available |
| Open core | A business model based on a core codebase thats open source (the "open core"), with licensed, proprietary features built on top |
| Source available | Code that is publicly visible, but must be licensed to use, modify, or distribute (e.g. the proprietary code of an open core company) |
## Why do people choose closed vs open source?
### Security through obscurity
“Walled garden” security software relies on keeping the code secret, which _can_ make it harder for hackers to exploit. Some open source skeptics say that transparency makes the code more vulnerable: bad actors can inspect and modify open source code without having to dig into binary code or reverse engineer anything.
However, with closed source solutions, youre completely reliant on the vendor having robust security practices—both before and during the event of a critical vulnerability. The technology landscape shifts so quickly and your possible attack surface grows constantly, so it can be a tall order for teams working on proprietary software to keep up with innovation. Closed source software is still vulnerable to zero-day attacks or exploitation of systems that havent yet applied a security patch.
### Getting ahead of vulnerabilities
Bug bounty programs are one way for closed source security vendors to preempt exploitation, but the prizes need to be sufficiently compelling. Bad actors can still choose to disclose their findings to the highest bidder instead of the vendor.
At least with open source projects, on balance there are likely to be more good actors actively working with and on the code, or ready to respond to Common Vulnerabilities and Exposures (CVEs).
### Rapid response
If youre using a closed source solution, you have to wait for the vendor to tell you what to do in the event of a major vulnerability. In the meantime, you just stop using potentially affected parts of your system until they can communicate the impact and how to remediate.
With open source, you have the benefit of a community working together towards the same goal. In a breach, you dont have to wait around for a vendor to act: you can get patches from the upstream project or hotfix the issue yourself (in the case of smaller open source projects which might be slower to respond).
> Average time-to-fix (TTF) vulnerabilities is now actually faster for open source projects than proprietary software (see Snyks [State of Open Source Security Report 2023](https://go.snyk.io/state-of-open-source-security-report-2023-dwn-typ.html)).
## Compliance
Sometimes the choice of closed source has little to do with whether or not the source code is public, and more to do with the requirements of governing bodies and auditors. Its easier to sell a legacy proprietary solution to stakeholders (in the vein of “Nobody ever got fired for buying IBM”) because they check the right boxes and satisfy compliance requirements. For some organizations, requirements dictate that you need a contract with a vendor rather than relying on an unsupported, community-driven service. Open core solutions can help to fill this gap (which well go into under Support and accountability below).
### Open source projects can have certifications too
Not all open source projects have the time and resources to invest in certifications, but some are pursuing these to make it easier for their solution to be approved for use. At Authentik Security, we're currently working towards an [ISO/IEC 27001](https://www.iso.org/standard/27001) certification for authentik, the open source project.
### Certifications don't _guarantee_ better security
Certifications don't cover all possible paths to exploitation. Plenty of the major data breaches of the past decade ([Okta](https://www.forbes.com/sites/thomasbrewster/2022/03/23/okta-hack-exposes-a-huge-hole-in-tech-giant-security/), [Experian](https://krebsonsecurity.com/2023/01/experian-glitch-exposing-credit-files-lasted-47-days/), [T-Mobile](https://www.t-mobile.com/news/business/customer-information)) targeted the type of large enterprise that likely has every possible security certification, yet they were still hacked. Simply proving that a third party verified that you're taking _some_ steps to safeguard some data isn't enough. As the saying goes, the defender needs to win every time, but the attacker only needs to win once.
With [supply chain attacks](https://goauthentik.io/blog/2023-04-07-supply-chain-attacks-what-we-can-all-do-better) becoming more common, you can better understand the provenance of open source code, because you have visibility into dependencies and can validate whether the project is using security tools like Static Composition Analysis (SCA), static or dynamic application security testing (SAST/DAST), multi-factor authentication, etc.
## Support and accountability
> “... big corporations want a neck to choke when things go wrong and Linus is hard to track down” — [steppinraz0r on reddit](https://www.reddit.com/r/cybersecurity/comments/15c3h0q/told_by_a_senior_programmer_that_open_source/jtz0yzx/)
Having a security vendor means accountability: formal support for implementation, bugs, and vulnerabilities. When choosing open source, you do have to consider whether you have the in-house expertise for management and maintenance. Or how confident are you in community support?
There are some legitimate concerns to raise with closed source support though. Some vendors outsource technical support to a third party, which may or may not be vetted (as in the [Okta breach of January 2022](https://www.forbes.com/sites/thomasbrewster/2022/03/23/okta-hack-exposes-a-huge-hole-in-tech-giant-security/)). And, as we saw above, [open source projects actually beat closed source vendors on TTF](https://go.snyk.io/state-of-open-source-security-report-2023-dwn-typ.html).
Security, authentication, and identity management are mission-critical services. For most companies, its wiser to be able to run and manage these in house. Again, open core can provide a happy medium solution, as you get:
- The visibility and transparency of open source
- Total flexibility and modifiability over the open source core
- A contract with a company who is actively contributing to and improving the product, and
- Support for setup and remediation (we just launched dedicated [support for Authentik Security Enterprise](https://goauthentik.io/blog/2023-08-31-announcing-the-authentik-enterprise-release)!)
# Neither open nor closed source is _inherently_ more secure
> “The idea that software is inherently safer because its released under an open source license is overly simplistic in the extreme. Just the most obvious reason for this is that opportunity for independent review doesn't guarantee that review will happen. The wisdom of the crowd doesnt guarantee a third-party review will be better or more thorough than a solid first-party system. Open source provides the possibility of review, and thats all. Hypothetical eyes make no bugs shallow.” — [godel_unicode on Hacker News](https://news.ycombinator.com/item?id=12284600)
Open source is not a silver bullet for security. The code may be open for inspection, but that doesnt mean that people are actively examining the code for vulnerabilities.
> “There is evidence that the people who have access to open source are more active in creating new code and extensions than auditing existing code. [One recent study](https://www.darkreading.com/application-security/open-source-developers-still-not-interested-in-secure-coding) showed that OSS developers spent less than 3% of their time working on improving security.” — Eugene H. Spafford, Josiah Dykstra, Leigh Metcalf, [What is Cybersecurity?](https://www.informit.com/articles/article.aspx?p=3172442&seqNum=9)
On the other hand, while closed source code may be hidden and has dedicated teams actively working to secure it, reverse engineering is still possible.
With open source, you also have greater flexibility to avoid vendor lock-in if youre not comfortable with a vendors choices. A recent [DEFCON talk](https://github.com/nyxgeek/track_the_planet/blob/main/nyxgeek_Track_the_Planet_2023.08.14.pdf) shared a user enumeration security risk in Microsoft Azure, which Microsoft did not deem a vulnerability. If you use Azure and dont want to take that risk, your only option is to switch providers, which can be an onerous change.
With open source, you can fork the project. This can also be true for tools with an open core model: depending on the license for the proprietary edition you may still be able to modify the code.
# Can you trust (but verify)?
There will always be bugs and vulnerabilities in software, whatever the distribution model. Accepting that, which model gives you more confidence?
Whatever solution you choose (whether its for authentication, authorization, or scanning), you need to trust that your security vendor will be honest and practice _responsible disclosure_.
The Okta breach eroded trust and reminded us of some critical considerations:
### Do you trust your vendors supply chain?
If youre entrusting a vendor with a mission-critical, sensitive service like authentication, you are also putting your trust in every vendor they choose to work with (which you may not have visibility into).
### Can you expect your vendor to be transparent?
Closed source vendors will optimize for different things when facing a security risk or vulnerability. They must mitigate for their customers as well as considering factors like protecting their reputation. They have to balance damage control with transparency (do they disclose immediately, even before theyre sure of the extent of customers affected?).
Open source projects can also suffer reputation damage. However its harder to hide vulnerabilities in public code, and the culture of transparency in open source communities is also an incentive that helps to hold open source vendors accountable.
These factors make it hard to take closed source vendors at their word. With open source code (and some source available solutions, depending on the license), you have the reassurance of being able to:
- Validate what the code does and how it does it
- Know what developments are being made
- Modify the code yourself
- For greatest confidence and control, [self host](https://goauthentik.io/blog/2023-01-24-saas-should-not-be-the-default)
For mission-critical services like authentication and identity management, you dont want to be beholden to a third party to be transparent and act quickly in the event of a CVE. Using security tools that build on open source gives you the most visibility and the flexibility.
Authentik Security offers both an open source version and a source available version of our flagship product, [authentik](https://goauthentik.io/). Either way, we don't ever give you a black box.

Binary file not shown.

Before

Width:  |  Height:  |  Size: 44 KiB

View File

@ -1,138 +0,0 @@
---
title: "Machine-to-machine communication in authentik"
slug: 2023-09-26-machine-to-machine-communication-in-authentik
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- machine-to-machine
- M2M
- SSO
- open source
- identity provider
- security
- authentication
- Docker
- Kubernetes
- Loki
hide_table_of_contents: false
image: ./Image1.png
---
> **_authentik is a unified identity platform that helps with all of your authentication needs, replacing Okta, Active Directory, Auth0, and more. Building on the open-source project, Authentik Security Inc is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) that provides additional features and dedicated support._**
---
We have provided M2M communication in authentik for the past year, and in this blog we want to share some more information about how it works in authentik, and take a look at three use cases.
## What is M2M?
Broadly speaking, M2M communication is the process by which machines (devices, laptops, servers, smart appliances, or more precisely the client interface of any thing that can be digitally communicated with) exchange data. Machine-to-machine communication is an important component of IoT, the Internet of Things; M2M is how all of the “things” communicate. So M2M is more about the communication between the devices, while IoT is the larger, more complex, overarching technology.
Interestingly, M2M is also implemented as a communication process between business systems, such as banking services, or payroll workflows. One of the first fields to heavily utilize M2M is the [oil and gas industry](https://blog.orbcomm.com/onshore-to-offshore-how-m2m-is-changing-oil-gas-world/); everything from monitoring the production (volume, pressure, etc.) of gas wells, to tracking fleets of trucks and sea vessels, to the health of pipelines can be done using M2M communication.
Financial systems, analytics, really any work that involves multi-machine data processing, can be optimized using M2M.
> “Machine to machine systems are the key to reliable data processing with near to zero errors” ([source](https://dataconomy.com/2023/07/14/what-is-machine-to-machine-m2m/))
Where there is communication in software systems, there is both authentication and authorization. The basic definition of the terms is that _authentication_ is about assessing and verifying WHO (the person, device, thing) is involved, while **_authorization_** is about what access rights that person or device has. So we choose to use the phrase “machine-to-machine communication” in order to capture both of those important aspects.
> Or we could use fun terms like **AuthN** (authentication) and **AuthZ** (authorization).
So in some ways you can think of M2M as being like an internal API, with data (tokens and keys and certs and all thing access-related) being passed back and forth, but specifically for authentication and authorization processes.
!["Screenshot of authentik UI"](./Image1.png)
<!--truncate-->
## M2M communication in authentik
As part of providing a unified platform for authentication, authentik supports OAuth2-based M2M communication. By “unified platform” we mean that authentik provides workplace authentication for team members, B2C login for web site visitors, global communities and non-profit teams, educational societies, and (coming soon) mobile authentication, so that all authentication needs are met by authentik as a unified platform.
### Use cases for M2M in authentik
Machine-to-machine communication speeds up processing and adds a layer of security to inter-application and complex, multi-machine systems. With authentik's M2M functionality, you can take advantage of these aspects and optimize your workflow for authentication and authorization between servers, applications, and any provider or source in your ecosystem.
**Common workflow**
The workflow for all three of the use cases that we discuss below share several core common steps:
1. Obtain a token from the environment you are working in (e.g. a build/CI tool such as GitLab or GitHub, or Kubernetes for applications running on a Kubernetes cluster).
2. Pass the token, via [client_credentials](https://goauthentik.io/docs/providers/oauth2/client_credentials), to authentik.
3. In the response, authentik returns a JWT (JSON Web Token).
4. The token is then used to authenticate requests to other services elsewhere. (These other services need to check the token for its validity, which can be done with the [proxy provider](https://goauthentik.io/docs/providers/proxy/) in authentik, for example.) A rough code sketch of this exchange follows this list.
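Here is a minimal sketch of steps 2 and 3 in Python. The URL and client ID are placeholders, and the form fields follow the generic OAuth2 client_credentials / JWT-bearer pattern; check the [client_credentials documentation](https://goauthentik.io/docs/providers/oauth2/client_credentials) for the exact fields your authentik version expects.

```python
# Rough sketch of the token exchange (steps 2 and 3 above). The endpoint URL,
# client ID, and the JWT-bearer assertion fields are illustrative; consult the
# client_credentials documentation for the exact parameters authentik expects.
import requests

AUTHENTIK_TOKEN_URL = "https://authentik.company/application/o/token/"  # placeholder
CLIENT_ID = "my-provider-client-id"  # placeholder


def exchange_for_authentik_jwt(platform_jwt: str) -> str:
    """Trade a token issued by the CI platform or cluster for an authentik JWT."""
    response = requests.post(
        AUTHENTIK_TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
            "client_assertion": platform_jwt,
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]


# Step 4: use the returned token against the protected service, e.g.
# requests.get(url, headers={"Authorization": f"Bearer {token}"})
```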
**Three authentik use cases**
Lets take a look at three specific use cases for implementing M2M with authentik.
**1. Building Docker images and passing them to a [Docker registry](https://docs.docker.com/registry/)**
After building and testing your application, you might want to package your application as a Docker image and push it to a registry so that others can use it for deployment.
For this use case, you can use M2M with authentik to push the package to your registry without needing to log in yourself, use a password, or even set up a pre-defined service account for the registry. Instead, you can create a policy in authentik that allows a specific repository on your CI platform to push to the Docker registry. When logging into the registry, you use the token you already have access to from the platform you're running on, and the rest happens behind the scenes!
For a real-life example, with code samples, take a look at my blog “[Setup a docker registry for passwordless Docker builds with GitHub/GitLab using authentik](https://beryju.io/blog/2022-06-github-gitlab-passwordless-docker/)”, which provides step-by-step instructions with code blocks.
**2. Collect Prometheus metrics from multiple clusters**
If you use Prometheus to monitor multiple Kubernetes clusters, you might want to collect all Prometheus metrics and put them in one place, using something like [Thanos](https://thanos.io/) or [Mimir](https://grafana.com/oss/mimir/) in order to better analyze the data. Using M2M functionality in authentik, you can simplify authentication, so that the source (the cluster sending the metrics, in this case) can authenticate itself with the receiving target cluster.
In this use case, you will create an expression policy, in which you define service accounts to allow communication between that specific cluster and authentik.
- You create an OAuth Source for each cluster (since each cluster usually has its own unique JWT Signing key). On the **Create a new source** panel, select **OpenID OAuth Source** as the type, and then click **Next**. Then you will need to populate the following fields:
- **Consumer key**, **Consumer secret**, **Authorization URL**, **Access token URL**, **Profile URL**, and **OIDC JWKS** (to obtain the key for the cluster, run the command `kubectl get --raw /openid/v1/jwks`).
- You can create a proxy provider to authenticate the incoming requests, where the proxy provider functions like a traditional reverse-proxy, sending traffic to Thanos or Mimir in the cluster but also requiring authentication for any requests. When defining your proxy provider, use the following syntax:
```python
# Replace these values with the namespace and service-account name for your prometheus instance
allowed_namespace = "prometheus-namespace"
allowed_service_account = "prometheus-sa"
jwt = request.context.get("oauth_jwt", None)
if not jwt:
    return False
allowed_sa = [
    f"system:serviceaccount:{allowed_namespace}:{allowed_service_account}",
]
return jwt["sub"] in allowed_sa
```
Then the rest is the same as in the first use case: obtain a JWT from the Kubernetes cluster, send the token to authentik, get back a different token, then send that token along with requests to Thanos, Mimir, or wherever you want to store the metrics. Prometheus then uses that token to authenticate its requests to the receiving cluster. In fact, you can configure Prometheus to do the token exchange work itself, by using the `oauth2` configuration option. For an example of how this can be set up, refer to [this YAML file](https://github.com/BeryJu/k8s/blob/b4b26e5/common-monitoring/monitoring-system/prom-agent.yaml#L24-L39), where I configured `remote_write`.
**3. GitOps with M2M and Loki**
This third use case is a twist on the first two, but even simpler.
We can utilize GitOps to configure [Loki alerting rules](https://grafana.com/docs/loki/latest/alert/), by using GitHub actions and a proxy provider to make Loki publicly accessible. This setup combines the use of a CI platform (as in the first use case) and using a proxy provider to authenticate requests (as in the second use case). In this third case, the authentication is for the requests from GitHub Actions to Loki.
- Create an OAuth Source for GitHub, selecting **OpenID OAuth Source** as the type. Then, instead of populating the **OIDC JWKS** field, use the **OIDC JWKS URL** field and set it to https://token.actions.githubusercontent.com/.well-known/jwks.
- As with the second use case, create a proxy provider, which acts like a traditional reverse-proxy, sending traffic to Loki but also authenticating any requests.
- Create an expression policy, using the following syntax:
```python
# Replace the two values below
github_user = "my-user"
github_repo = "my-repo"
jwt = request.context.get("oauth_jwt", None)
if not jwt:
    return False
if jwt["iss"] != "https://token.actions.githubusercontent.com":
    return False
if jwt["repository"] != f"{github_user}/{github_repo}":
    return False
return True
```
- Finally, call a snippet in a GitHub composite action (this can be done manually or programmatically) to exchange the tokens between the GitHub action and Loki. The proxy provider then verifies the tokens and forwards the requests to Loki. A rough sketch of the GitHub side of this exchange follows below.
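As a sketch of the GitHub Actions side, the snippet below fetches the workflow's OIDC token using the `ACTIONS_ID_TOKEN_REQUEST_URL` and `ACTIONS_ID_TOKEN_REQUEST_TOKEN` variables that GitHub exposes when the workflow requests the `id-token: write` permission. The audience value is a placeholder, and the exchange with authentik then follows the same client_credentials pattern sketched in the common workflow above; this is an outline, not the composite action mentioned in the next section.

```python
# Sketch of fetching the GitHub Actions OIDC token inside a workflow step.
# Requires the workflow to declare the "id-token: write" permission; the
# audience value below is a placeholder for whatever your OAuth source expects.
import os

import requests


def github_oidc_token(audience: str = "https://authentik.company") -> str:
    """Fetch the workflow's OIDC token from the GitHub Actions runtime."""
    url = os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"]
    runtime_token = os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"]
    response = requests.get(
        url,
        params={"audience": audience},
        headers={"Authorization": f"Bearer {runtime_token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["value"]
```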
### What's next
Look for our upcoming tutorial about configuring machine-to-machine communication using authentik. As part of the tutorial, we will provide a GitHub composite action that bundles the multiple steps involved in token creation and exchange into a single, reusable action, instead of needing multiple `run` commands.
Wed like to hear from you about how you use M2M, or how you plan to in the future. And as always, if you are interested in collaborating with us on our M2M functionality, or contributing to our documentation, visit us in our [GitHub repository](https://github.com/goauthentik/authentik) or reach out to us at [hello@goauthentik.io](mailto:hello@goauthentik.io).

Binary file not shown.

Before

Width:  |  Height:  |  Size: 83 KiB

View File

@ -1,81 +0,0 @@
---
title: "We need to talk about SCIM: More deviation than standard"
description: "SCIMs many deviations, undocumented edge cases, and lack of official test coverage make it an especially complex protocol to implement."
slug: 2023-10-05-SCIMs-many-deviations
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- SCIM
- SSO
- open source
- community
- identity provider
- security
- authentication
hide_table_of_contents: false
image: ./image1.png
---
> **_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._**
---
As a young security company, we've been working on our implementation of SCIM (System for Cross-domain Identity Management), which I'll share more about below. SCIM is in many ways a great improvement on LDAP, but we've run into challenges in implementation, and some things just seem to be harder than they need to be. Is it just us?
!["authentik admin interface"](./image1.png)
<!--truncate-->
# Improvements on LDAP
From a security standpoint, its wise not to expose LDAP (Lightweight Directory Access Protocol) to the internet if youre using Active Directory, OpenLDAP, FreeIPA or anything similar as your source of truth for authentication. SCIM fills a need for directory synchronization in a cloud-native world in which many companies arent hosting the software they use on their own servers.
SCIM, being an HTTP API specification, is much simpler and (in theory) gives you less to worry about than LDAP (being its own specific protocol). SCIM also offers time- and cost-saving advantages over Just in Time provisioning, especially for scaling companies. SCIM can save hours of company time for IT admins who no longer have to manually create individual accounts across multiple applications for new team members. Offboarding is also streamlined as departing team members can be deprovisioned automatically, preventing unauthorized access.
Most modern SaaS applications support SCIM, making it essential for security vendors to support the protocol, but it does come with its drawbacks.
# Growing pains
authentik currently supports SCIM going outwards; what this means is that authentik is your source of truth/central directory, and you can use authentik together with a tool like [Sentry](https://sentry.io) that supports SCIM. In this case all your users or employees in authentik automatically get created in Sentry, with their correct group assignment, and they can just log in.
Most of the information and commentary I see about SCIM focuses on the advantages described above, but I dont see a lot of talk about the pitfalls of SCIM. Im sharing our experiences here and am curious if others have found the same or can tell me how theyre avoiding these (I would love to hear that were doing this wrong actually!).
## Deviation from standards isn't well documented
Implementing a protocol based on reading the RFCs and then writing the code is in itself not fun (to be fair, this is true for implementing any protocol based on a standard). Having implemented SCIM in line with the specification though, once we actually started testing with different solutions that can receive SCIM, we discovered a lot of quirks along the lines of x solution doesnt do y (which the documentation says they should) or they do it slightly differently, and so on.
This leads to a lot of workarounds which shouldn't be necessary, or things that simply don't work without a clear cause. For example, when we started testing SCIM with Sentry, we ran into a lot of deviations (to their credit, these were mostly listed in their [documentation](https://docs.sentry.io/product/accounts/sso/#scim-provisioning)). One of the issues I ran into when testing locally was that when we created a user with SCIM, it just returned an error saying, “Please enter a valid email address”, even though we _had_ sent it a valid email address. At least Sentry has the advantage of being open source, so we can just go and look at the code and see what's happening, but this is still no small effort, and you don't have that option with closed source solutions.
You can see other examples of confusing/unexpected behavior from SCIM [here](https://github.com/goauthentik/authentik/issues/5396) and [here](https://github.com/goauthentik/authentik/issues/6695).
## Testing isn't built out
Some protocol communities make a big effort to uphold adherence to the standard. OpenID Connect is another standard that's well defined by multiple RFCs, but also has a lot of room for vendor-specific quirks. However, with OpenID we have the reassurance that the [OpenID Foundation](https://openid.net/foundation/) is behind it.
The OpenID Foundation is a non-profit standards body of which Authentik Security is a member, but anyone can join to contribute to working groups that support implementation. OpenID Connect offers an [entire test suite](https://openid.net/certification/about-conformance-suite/) made up of hundreds of tests that you can run against your implementation, testing for edge cases and all the behaviors that they define. If you pass all the required tests you can send them the test results and get a [certification](https://openid.net/certification/) (which we are also working on) that your software adheres to the standards.
Instead of working in the dark and trying to make sure youve interpreted the specs correctly (while testing with vendors who might have their own interpretations), you have some reassurance that youre doing the right things when developing with OpenID Connect.
To my knowledge there isn't an official equivalent for SCIM—there are some smaller community projects that try to do something similar, but again, then you have to rely on someone's interpretation of the standard. Even the [SCIM website's overview page](https://scim.cloud/) says, “Information on this overview page is not normative.”
## Updating a user is unnecessarily complex
As mentioned above, authentik currently supports SCIM in one direction, but we are [working on making it so that another application can send SCIM to authentik](https://github.com/goauthentik/authentik/pull/3051), to create users in it. In this process weve discovered that updating a user is surprisingly annoying to implement. With SCIM [you have two options to update a user](https://datatracker.ietf.org/doc/html/rfc7644#autoid-22):
- You can either send a request to replace the user (for which you have to send _all_ the user's data), or
- You can send a patch request
A lot of vendors use the patch request option to update group membership: they send a patch request for a user and just say, for example, “Add that group,” or “Remove that group.” This approach makes more sense in the case of an advanced user with tons of groups, as you're not replacing everything, just making adjustments to their membership. However, this patch request is done with a custom filtering expression language which is extremely and needlessly complex.
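For reference, this is roughly what such a patch body looks like under RFC 7644, shown here as a Python dict (the member IDs are made up). The string inside `path` on the remove operation is the filter expression language in question.

```python
# Approximate shape of a SCIM PATCH body for group membership (per RFC 7644).
# The IDs are made up; the "path" filter on the remove operation is the
# expression language discussed below.
patch_group_membership = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [
        {
            # Add one member to the group
            "op": "add",
            "path": "members",
            "value": [{"value": "2819c223-7f76-453a-919d-413861904646"}],
        },
        {
            # Remove a specific member, selected with a filter expression
            "op": "remove",
            "path": 'members[value eq "2819c223-7f76-453a-919d-413861904646"]',
        },
    ],
}
```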
My first thought when I encountered this was, “Okay, can I just parse this with RegEx?”, but it's not possible. The correct way to parse it is with [ANTLR](https://www.antlr.org/), a parser generator for different kinds of grammars. The thing about ANTLR is that it's a type of tool usually used to build a compiler: it allows you to define a grammar, for which it generates a parser that can then parse things in said grammar. It's not typically used for a filtering language for directories, and there are a lot of existing syntaxes that could have been used for this purpose. Luckily, some people have written a full grammar for this, but I was hoping that there would at least be an official definition of an ANTLR grammar.
# Immaturity bites
LDAP, being the more mature protocol (introduced in the 90s), has the advantage that deviations have been well documented and kinks ironed out. There are a handful of “standard” implementations like Active Directory, FreeIPA and some others. Similar to SAML support—theres just been a lot more time to document edge cases and workarounds.
SCIM, despite being around since 2015, is still subject to a lot of different interpretations of the standard, which leads to varying implementations and quirks with how vendors do SCIM. Theres a maturity challenge at work here in both senses—from the vendors but also from ourselves. Since weve added SCIM to our product a lot later than LDAP, theres still a lot of room for us to catch up and make our implementation better.
_Have you worked on SCIM implementation? Got advice for us? Wed love to hear from you in the comments._

Binary file not shown.

Before

Width:  |  Height:  |  Size: 688 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 72 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 368 KiB

View File

@ -1,172 +0,0 @@
---
title: "How small companies get taxed out of security and why the whole industry suffers"
description: "Software vendors have managed to normalize charging exorbitant prices for basic security features."
slug: 2023-10-18-taxed-out-of-security
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- security tax
- SSO
- SSO tax
- enterprise
- pricing
- identity provider
- security
- authentication
hide_table_of_contents: false
---
> **_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._**
---
Let's say you're working at a small startup: You're the CTO, your CEO is a good friend, and you have a couple of developers working with you from a previous company. You're building your initial tech stack, and you start (where else?) with GitHub.
The [pricing](https://github.com/pricing) is simple enough. There's a pretty feature-rich free plan, but you're willing to pay up because the Team plan includes features for restricting access to particular branches and protecting secrets.
But the enterprise plan, the plan that costs more than four times as much per user per month, the plan that seems targeted at, well, enterprises, promises “Security, compliance, and flexible deployment.”
> **Is security… not for startups?**
The feature comparison bears this out: Only the enterprise plan offers single sign-on (SSO) functionality as part of the package, a feature that security experts have long agreed is essential. But don't get mad at GitHub.
Do you want [Box](https://www.box.com/pricing)? Youll have to pay twice as much for external two-factor authentication.
Do you want [Mailtrap](https://mailtrap.io/pricing/)? The team, premium, and business plans wont do. Only the enterprise plan, which costs more than $300 per month more than the team plan, offers SSO.
Do you want [Hubspots marketing product, but with SSO?](https://www.hubspot.com/pricing/marketing/enterprise?products=marketing-hub-professional_1&term=annual) Prepare to pay $2,800 more per month than the next cheapest plan.
And these are only a few examples. [SSO.tax](https://sso.tax/), a website started by Rob Chahin, gathers many more. If you look through, youll see companies like [SurveyMonkey](https://www.surveymonkey.com/pricing/details/) and [Webflow](https://webflow.com/pricing) even restrict SSO to enterprise plans with a _Contact Us_ option instead of a price.
!["pricing page"](./image1.png)
<!--truncate-->
Youll also notice that many of the listings are outdated (the Hubspot listing was last updated in 2018, for example, but we quoted the current price).
Many developers are likely already familiar with the concept of an SSO tax, and some are familiar with the broader idea of a security tax. Fewer know why, despite these concepts entering the lexicon, vendors can still get away with quietly restricting basic security and sign-in features to expensive enterprise plans.
## Three types of security taxes
Vendors have managed to normalize charging exorbitant prices for basic security features. Here, we're not even necessarily talking about often complex logging or monitoring features; the mere ability to sign in to the software itself is an opportunity to upcharge.
Its a blurry line, but its worth distinguishing between _valuable_ features and _value-added_ features. Unlike other features, which are valuable but part of the base product, value-added features add incremental value on top of the base product. So, we expect GitHub to be basically operational on the free plan but look to other plans to see whether we want to add, say, a wiki feature.
Security taxes are baseline features pretending to be value-added features, and vendors can charge them by bundling them with those features. These security taxes are often exploitative because companies have to pay for more features than they need just to get a security feature that should have been in the base product.
A baseline feature is turned into a revenue-generation tool, often far out of step with its actual maintenance costs. “If your SSO support is a 10% price hike,” Chahin writes, “you're not on this list. But these percentage increases are not maintenance costs, they're revenue generation because you know your customers have no good options.”
Research from Grip, an identity risk management company, shows how the [lack of good options](https://www.grip.security/blog/why-sso-doesnt-protect-80-of-your-saas) plays out. Grips research shows that 80% of the SaaS applications employees use are not in their companies SSO portals. In their interviews with CISOs, SSO licensing costs i.e., the SSO tax were the top reason.
The same logic that applies to the SSO tax also applies to two other security taxes: the MFA tax and the user tax.
Security experts widely agree that SSO is essential for security, but multi-factor authentication is more basic still, making the act of charging extra for MFA even more egregious. But, as we saw in the intro, companies like Box charge extra for multiple authentication methods.
The user tax is more subtle. When companies charge excessive amounts per-user to secure those users' accounts, users must either choose to pay the security tax or engage in the highly insecure practice of sharing credentials among several individuals. To be clear, many companies charge on a per-user or per-seat basis (including Authentik), so you cant call it a tax until the additional costs really become exorbitant.
## Why the anti-SSO tax movement failed
The SSO tax has become the most recognized of the three security taxes above.
By now, there seems to be broad acceptance that the SSO tax is unfair largely thanks to the SSO Wall of Shame but there hasnt been much change from software vendors.
A grassroots effort like the SSO Wall of Shame would seem effective at public embarrassment. Still, even companies that target users who know better, such as [Docker](https://www.docker.com/pricing/) and developers, or [JFrog](https://jfrog.com/pricing/) and security engineers, charge an SSO tax.
Future efforts against security taxes will have to keep in mind the three reasons the SSO tax movement failed if change is ever going to happen.
**1. The SSO tax is too profitable and too easy to charge**
The most obvious reason is also the strongest. The very thing we're complaining about, vendors charging too much for a feature that isn't even that expensive to build or maintain, is exactly why they charge it.
Ben Orenstein, co-founder and CEO of remote pair-programming app Tuple, writes about [why SSO should be “table stakes”](https://tuple.app/blog/sso-should-be-table-stakes) and why so many other companies (including Tuple, before this post) charged it.
“If youre a new SaaS founder and you want to maximize your revenue,” he writes, “I recommend you create an enterprise tier, put SSO in it, and charge 2-5x your normal pricing.” He even explains that because “SSO costs close-to-nothing after a little automation, this price increase is all profit.”
The math is pretty undeniable, proving Chahins basic idea: Vendors can add SSO to an enterprise tier and charge much more than it costs to maintain it.
Patrick McKenzie, formerly from Stripe, has tweeted about the [same logic](https://twitter.com/patio11/status/1481293496506253321?s=20&t=GSqe0KHLuJaY7TYPS-p4_w). “SSO is a segmentation lever,” he writes, “and a particularly powerful one because everybody in the sophisticated-and-well-monied segment is increasingly forced to purchase it.”
Both McKenzie and Orenstein emphasize customers being “forced” to adopt an SSO plan. Many companies are selling into regulated industries, so theyll likely be forced to upgrade all of their software to whichever plan includes SSO.
**2. The PR risk is too low, and security taxes are too normalized**
Orenstein writes, “People will get a little mad at you, but not much, because just about everyone does this,” and _just about everyone does this_ links to the SSO.tax site. By now, the SSO Wall of Shame is proof the SSO tax exists, not so much a viable effort at change.
A big part of the explanation is that the Wall of Shame was primarily one persons effort, whereas the companies that wanted to keep charging these taxes were larger and much more powerful. The vendors charging the SSO tax had the resources to simply outlast the Wall of Shame.
Many of these vendors also received some benefit of the doubt. SSO generally takes some effort to build and some resources to maintain, especially custom SAML setups, so vendors have been able to rely on a little plausible deniability.
A few companies have tried to attract some attention by removing the SSO tax, including Tuple and [Scalr](https://www.scalr.com/blog/sso-tax), but none have really gone viral for the effort.
**3. The collective action problem traps individuals**
The previous two reasons the SSO tax movement failed focused on problems at the individual company level, but the greatest reason might be industry-wide.
If we zoom out, the SSO tax isn't just a business decision; it's a collective action problem.
A collective action problem is when individuals in a given situation would benefit from cooperating but, because of other incentives, work against each other to the detriment of all. People keep driving cars, for example, due to a wide variety of valid individual incentives, but traffic, pollution, and climate change eventually hurt the collective, including the drivers.
As the software supply chain has evolved, open-source adoption has grown, and software companies have become increasingly interconnected, software security has become an issue that affects the entire industry. The SSO tax shows, however, that a collective action problem hinders taking the steps necessary to improve security for everyone.
In the past, companies considered security in an organization vs. attacker model, as one entity building a perimeter to defend itself against targeted attackers. But in modern security, organizations are so interconnected that attackers can leap from organization to organization and move laterally from low-value vulnerabilities to high-value exploits.
When attackers [hacked Target in 2013](https://slate.com/technology/2022/04/breached-excerpt-hartzog-solove-target.html#:~:text=In%20caper%20movies%2C%20the%20criminals,party%20vendor%20hired%20by%20Target.), they didnt go after Target directly; they entered via a third-party vendor Target had hired. And when Log4j became [headline news in 2022](https://builtin.com/cybersecurity/log4j-vulerability-explained), it wasnt because one attacker found one exploit; a vast range of companies suddenly realized they were vulnerable because they had all adopted the same open-source component.
The more interconnected organizations are, the more security becomes a collective action problem that demands companies shift from prioritizing profits via security taxes to pursuing industry-wide security by offering accessible security features and reinforcing security best practices.
Ed Contreras, Chief Information Security Officer at Frost Bank, said it well in an [interview with CISO Series](https://cisoseries.com/we-shame-others-because-were-so-right-about-everything): “With single sign-on, were protecting both of our companies” and that the SSO tax, as a result, is an “atrocity.”
## Compromise is the only way out
For the reasons above, the movement to remove the SSO tax has seemingly ground to a halt. Vendors are still profiting, companies are still paying, and the further outdated the Wall of Shame becomes, the less anyone feels ashamed.
But that doesnt mean progress hasnt been made. Coining the term “SSO tax” named the issue and expanding the idea of security taxes has pushed people toward new ways of thinking about security. If pricing plans are to change, however, we need to acknowledge the strong reasons for charging the SSO tax and offer compromises that satisfy all parties.
### Offer cheaper ways to authenticate
Sometimes, heated discussions about the SSO tax on Hacker News miss the fact that SSO technology isnt always easy to build and maintain.
For example, Klaas Pieter Annema, engineering manager at Sketch, [writes](https://twitter.com/klaaspieter/status/1562353404143435776), “I was briefly EM for the team maintaining SSO at Sketch. Supporting Google and Microsoft is easy. Supporting whatever wonky homebuilt some large enterprises use is a huge time sync [sic].”
One compromise is to split these two situations apart. Vendors can offer simple ways to provide SSO for cheap or free but charge for the more complex, customized ways.
Bitrise, for example, offers [standard SSO](https://bitrise.io/plans-pricing) across its Hobby, Starter, and Teams pricing tiers but only offers custom SAML at its Velocity and Enterprise tiers.
!["pricing tiers for Bitrise with included free SSO"](./image3.png)
### Charge less
Even in the original Wall of Shame, Chahin writes, “While Id like people to really consider it a bare minimum feature for business SaaS, Im OK with it costing a little extra to cover maintenance costs. If your SSO support is a 10% price hike, youre not on this list.”
A compromise is already available: Vendors can charge for the labor to offer SSO but not use SSO as a tool for revenue generation. Vendors can charge less outright or move SSO to cheaper pricing tiers.
As it turns out, this shift might benefit vendors in the long run. According to research from [Gergely Orosz](https://newsletter.pragmaticengineer.com/p/vendor-spend-cuts), nearly 90% of companies now consider it a goal to reduce vendor spend.
!["diagram to illustrate a poll showing that nearly 90% of companies now consider it a goal to reduce vendor spend from <a href="https://pragmaticengineer.com">pragmaticengineer.com</a>"](./image4.png)
The SSO tax has become an obvious target. Any vendor charging an SSO tax is more likely to face spending cuts from customers and less likely to get conversions from newly price-conscious customers.
Orosz writes, “Consider removing the SSO tax to boost conversions for smaller companies. CTOs at early-stage companies have mentioned they are selective when onboarding to SaaSes that charge an SSO tax.”
Orosz also quotes a few anonymized executives, with one CTO saying, “We're trying to roll out SSO, but many SaaS vendors charge a security tax, so we've had to be selective about which services we upgrade.”
### Unbundle security, support, and value-added features
Security and value-added features, as we covered earlier, are very different kinds of features. One way vendors disguise the SSO tax is by charging for these features as a bundle; therefore, one way to compromise is to unbundle these features so that vendors can charge for value-added features but not for baseline security features.
Once vendors unbundle these features, the previous two compromises make more sense: they can either charge less or introduce separate, cheaper SSO features. Similarly, companies can also distinguish between SSO feature costs and SSO support costs.
In the previous example, when Klaas Pieter Annema, engineering manager at Sketch, mentioned how SSO frequently became a huge time sink, he also wrote that Sketch “ended up with a rotating support role largely to free up time for these customers.”
When companies refer to the costs of SSO, this cost is often what they're referring to: not the sheer cost of building and maintaining the feature, but the ongoing support costs. That points to another potential compromise: Vendors could charge for an SSO feature with 24/7 support and charge less for an SSO feature that leaves maintenance up to the customer.
## Security vendors are caught in the middle, but developers can build a way out
Throughout this article, weve hardly mentioned a central party: SSO vendors. Despite the obvious centrality of SSO vendors and SSO products and tools, security vendors have little leverage when it comes to the SSO tax.
What we can do, however, is argue for a shift in industry norms: As weve written before, the buy vs. build framework is outdated, and its no longer obvious that companies should be buying by default.
The SSO tax persists because its easy for vendors to charge, and companies dont consider other options. As companies consider those options and rediscover why [identity is fun](https://goauthentik.io/blog/2023-08-16-lets-make-identity-fun-again), the SSO tax will become less and less viable.

View File

@ -1,100 +0,0 @@
---
title: Okta got breached again and they still have not learned their lesson
description: “HAR files uploaded to Okta support system contained session tokens.”
slug: 2023-10-23-another-okta-breach
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- security breach
- SSO
- malicious hacker
- HAR file
- session token
- identity provider
- security
- authentication
- okta
- cloudflare
- beyondtrust
- har
hide_table_of_contents: false
---
> **_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and Auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._**
---
## Another security breach for Okta
Late last week, on October 20, Okta publicly [shared](https://sec.okta.com/harfiles) that they had experienced a security breach. Fortunately, the damage was limited. However, the incident highlights not only how incredibly vigilant vendors (especially huge vendors of security solutions!) must be, but also how risky the careless following of seemingly reasonable requests can be.
We now know that the breach was enabled by a hacker who used stolen credentials to access the Okta support system. This malicious actor then collected session tokens that were included in HAR files (HTTP **_Archive_** format) that customers had uploaded to the Okta support system. A HAR file is a JSON-formatted **_archive_** of a browser session's recorded HTTP traffic, including headers, cookies, and request and response data. It is not rare for a support team troubleshooting an issue to request a HAR file from their customer: [Zendesk](https://support.zendesk.com/hc/en-us/articles/4408828867098-Generating-a-HAR-file-for-troubleshooting) does it, [Atlassian](https://confluence.atlassian.com/kb/generating-har-files-and-analyzing-web-requests-720420612.html) does it, [Salesforce](https://help.salesforce.com/s/articleView?id=000385988&type=1) as well.
So it's not the HAR file itself that is the problem; it is what was in the file, and left in the file. Compounding the risk is our collective training not to second-guess support teams, especially the support team at one of the world's most renowned identity protection vendors.
But it is not all on Okta; every customer impacted by this hack, including 1Password (who communicated the breach to Okta on September 29), BeyondTrust (who communicated the breach on October 2), and Cloudflare (October 18), was "guilty" of uploading HAR files that had not been scrubbed clean and still included session tokens and other sensitive access data. (Cleaning a HAR file is not always a simple task; there are tools like [Google's HAR Sanitizer](https://github.com/google/har-sanitizer), but even tools like that don't 100% guarantee that the resulting file will be clean.)
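If you want a sense of what "scrubbing" means in practice, here is a minimal sketch (not Google's tool, and not exhaustive) that redacts the most obviously sensitive fields from a HAR file before it leaves your machine. The field names follow the HAR 1.2 structure; the file names and header list are placeholders you should adapt:

```python
import json

SENSITIVE_HEADERS = {"cookie", "set-cookie", "authorization", "x-api-key"}

def scrub_har(path_in: str, path_out: str) -> None:
    with open(path_in) as f:
        har = json.load(f)

    for entry in har.get("log", {}).get("entries", []):
        for section in (entry.get("request", {}), entry.get("response", {})):
            # Session tokens usually live in cookies and auth headers.
            for header in section.get("headers", []):
                if header.get("name", "").lower() in SENSITIVE_HEADERS:
                    header["value"] = "REDACTED"
            section["cookies"] = []
        # Response bodies can also leak tokens; blank them out.
        entry.get("response", {}).get("content", {})["text"] = ""

    with open(path_out, "w") as f:
        json.dump(har, f, indent=2)

scrub_har("okta-support.har", "okta-support.scrubbed.har")
```

Even with a pass like this, review the output by hand before uploading it anywhere.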
## Target the ancillaries
An interesting aspect of this hack is that it exploited a less-considered vulnerability: support teams, which are not the typical entry point for hackers.
But security engineers know that hackers go in at odd, unexpected angles. A classic parallel: when someone wants data that a CEO has, they don't go to the CEO, they go to (and through) the CEO's assistant!
Similarly, the support team at Okta was used as the entry point. Once the hacker gained control of a single customer's account, they worked to take control of the main Okta dashboard and the entire support system. This lateral-then-upward movement through access control layers is a common technique of hackers.
## Its the response… lesson not yet learned
The timing of Okta's response: not great. The initial denial of the incident: not great. And then, to add insult to injury, there's what can objectively be labeled an [abysmal “announcement” blog](https://sec.okta.com/harfiles) from Okta on October 20.
Everything from the obfuscatory title to the blogs brevity to the actual writing… and importantly, the lack of any mention at all of BeyondTrust, the company that informed Okta on October 2nd that they suspected a breach of the Okta support system.
> “_Tracking Unauthorized Access to Okta's Support System_” has to be the lamest of all confession titles in the history of security breach announcements.
Not acknowledging that their customers informed them first seems like willful omission, and it absolutely illustrates that Okta has not yet learned their lesson about transparency, trusting their customers and security partners, and the importance of moving more quickly towards full disclosure. Ironically, BeyondTrust thanks Okta for their efforts and communications during the two-week period of investigation (and denial).
Back to the timing; BeyondTrust has written an excellent [article about the breach](https://www.beyondtrust.com/blog/entry/okta-support-unit-breach), with a rather damning timeline of Oktas responses.
> “We raised our concerns of a breach to Okta on October 2nd. Having received no acknowledgement from Okta of a possible breach, we persisted with escalations within Okta until October 19th when Okta security leadership notified us that they had indeed experienced a breach and we were one of their affected customers.”([source](https://www.beyondtrust.com/blog/entry/okta-support-unit-breach))
The BeyondTrust blog provides important details about the persistence and ingenuity of the hacker.
> “Within 30 minutes of the administrator uploading the file to Oktas support portal an attacker used the session cookie from this support ticket, attempting to perform actions in the BeyondTrust Okta environment. BeyondTrusts custom policies around admin console access initially blocked them, but they pivoted to using admin API actions authenticated with the stolen session cookie. API actions cannot be protected by policies in the same way as actual admin console access. Using the API, they created a backdoor user account using a naming convention like existing service accounts.”
Oddly, the BeyondTrust blog about the breach does a better job of selling Okta (by highlighting the things that went right with Okta) than the Okta announcement blog. For example, in the detailed timeline, BeyondTrust points out that one layer of prevention succeeded when the hacker attempted to access the main internal Okta dashboard: because Okta still views dashboard access as a new sign-in, it prompted for MFA, thus thwarting the login attempt.
Cloudflares revelation of their communications timeline with Okta shows another case of poor response timing by Okta, another situation where the customer informed the breached vendor first, and the breached company took too long to publicly acknowledge the breach.
> “In fact, we contacted Okta about the breach of their systems before they had notified us.” … “We detected this activity internally more than 24 hours before we were notified of the breach by Okta.” ([source](https://blog.cloudflare.com/how-cloudflare-mitigated-yet-another-okta-compromise/))
In their blog about this incident, Cloudflare provides a helpful [set of recommendations](https://blog.cloudflare.com/how-cloudflare-mitigated-yet-another-okta-compromise/) to users, including sensible suggestions such as monitoring for new Okta users created, and reactivation of Okta users.
Which just takes us back to the rather lean response by Okta; their customers wrote much more informative and helpful responses than Okta themselves.
## Keep telling us
> We cant be reminded often enough about keeping our tokens safe.
This incident at Okta is parallel to the breach at Sourcegraph that we recently [blogged about](https://goauthentik.io/blog/2023-08-11-sourcegraph-security-incident), in which a token was inadvertently included in a GitHub commit, and thus exposed to the world. With Okta, it was session tokens included in an uploaded HAR file, exposed to a hacker who had already gained access to the Okta support system.
But talk about things that keep security engineers up at night; timing was tight on this one.
The initial breach attempt was noticed by BeyondTrust within only 30 minutes of their having uploaded a HAR file to Okta Support. By default (and this is a good, strong, industry-standard default) Okta session tokens have a lifespan of two hours. However, with hackers moving as quickly as these, 2 hours is plenty long for the damage to be done. So, the extra step of scrubbing clean any and all files that are uploaded would have saved the day in this case.
> Keep your enemies close, but your tokens even closer.
## Stay vigilant out there
Lessons learned abound with every breach. Each of us in the software and technology area watches and learns from each attack. In the blog by BeyondTrust, they provide some valuable steps that customers and security teams can take to monitor for possible infiltration.
Strong security relies on multiple layers, enforced processes, and defense-in-depth policies.
> “The failure of a single control or process should not result in breach. Here, multiple layers of controls -- e.g. Okta sign on controls, identity security monitoring, and so on, prevented a breach.” ([source](https://www.beyondtrust.com/blog/entry/okta-support-unit-breach))
A [writer on Hacker News](https://news.ycombinator.com/item?id=37963074) points out that Okta has updated their [documentation](https://help.okta.com/oag/en-us/content/topics/access-gateway/troubleshooting-with-har.htm) about generating HAR files to tell users to sanitize the files first. But whether it's HAR files or GitHub commits, lack of MFA or misuse of APIs, we all have to stay ever-vigilant to keep ahead of malicious hackers.
## Addendum
This blog was edited to provide updates about the [1Password announcement](https://blog.1password.com/okta-incident/) that they too were hacked, and to clarify that the hacker responsible for obtaining session tokens from the HAR files had originally gained entry into the Okta support system using stolen credentials.

View File

@ -1,130 +0,0 @@
---
title: 3 ways you (might be) doing containers wrong
description: “Using containers is not a best practice in itself. Here are some mistakes beginners make with containers, and how we set them up correctly at authentik.”
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- application
- runtime
- SSO
- Docker
- containers
- :latest
- identity provider
- security
- authentication
hide_table_of_contents: false
---
_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and Auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._
---
There are two ways to judge an application:
1. Does it do what its supposed to do?
2. Is it easy to run?
This post is about the second.
Using containers is not a best practice in itself. As an infrastructure engineer by background, Im pretty opinionated about how to set up containers properly. Doing things the “right” way makes things easier not just for you, but for your users as well.
Below are some common mistakes that I see beginners make with containers:
1. Using one container per application
2. Installing things at runtime
3. Writing logs to files instead of stdout
## Mistake #1: One container per application
There tend to be two mindsets when setting up containers:
- The inexperienced usually think 1 container = 1 application
- The other option is 1 container = 1 service
Your application usually consists of multiple services, and to my mind these should always be separated into their own containers (in keeping with the [Single Responsibility Principle](https://en.wikipedia.org/wiki/Single-responsibility_principle)).
For example, authentik consists of four components (services):
- Server
- Worker
- Database
- Cache
With our deployment, that means you get four different containers because they each run one of those four services.
### Why you should use one container per _service_
At the point where you need to scale, or need High Availability, having different processes in separate containers enables horizontal scaling. Because of how authentik deploys, if we need to handle more traffic we can scale up to 50 servers, rather than having to scale up _everything_. This wouldnt work if all those components were all bundled together.
Additionally, if youre using a container orchestrator (whether thats Kubernetes or something simpler like [Docker Compose](https://goauthentik.io/docs/installation/docker-compose)), if its all bundled together, the orchestrator cant distinguish between components because theyre all in the black box of your container.
Say you want to start up processes in a specific order. This isnt possible if theyre in a single container (unless you rebuild the entire image). If those processes are separate, you can just tell Docker Compose to start them up in the order you want, or you can run specific components on specific servers.
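As a rough illustration of one-container-per-service, here is a simplified sketch (not authentik's actual Compose file; the image names, tags, and commands are assumptions, so check the installation docs for the real thing):

```yaml
# docker-compose.yml (illustrative only)
services:
  postgresql:
    image: postgres:16-alpine
  redis:
    image: redis:7-alpine
  server:
    image: ghcr.io/goauthentik/server:2023.10.1
    command: server
    depends_on: [postgresql, redis]
  worker:
    image: ghcr.io/goauthentik/server:2023.10.1
    command: worker
    depends_on: [postgresql, redis]
```

Each service can now be scaled, scheduled, and restarted independently, and the orchestrator controls start-up order through `depends_on`.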
Of course, your application architecture and deployment model need to support this setup, which is why its critical to think about these things when youre starting out. If youre reading this and thinking, I have a small-scale, hobby project, this doesnt apply to me—let me put it this way: you will never regret setting things up the “right” way. Its not going to come back to bite you if your situation changes later. It also gives users who install the application a lot more freedom and flexibility in how _they_ want to run it.
## Mistake #2: Installing things at runtime
Your container image should be complete in itself: it should contain all code and dependencies—everything it needs to run. This is the point of a container—its self contained.
Ive seen people set up their container to download an application from the vendor and install it into the container on startup. While this does work, what happens if you dont have internet access? What if the vendor shut down and that URL now points to a malicious bit of code?
If you have 100 instances downloading files at startup (or end up scaling to that point), this can lead to rate limiting, failed downloads, or your internet connection getting saturated—its just inefficient and causes problems that can be avoided.
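By contrast, here is a minimal sketch of fetching the application once at build time so the image is self-contained; the URL, paths, and version are placeholders, not a real artifact:

```dockerfile
FROM debian:12-slim

# Bake the release into the image at build time,
# instead of downloading it in the entrypoint at every container start.
ARG APP_VERSION=1.2.3
ADD https://example.com/releases/myapp-${APP_VERSION}.tar.gz /tmp/myapp.tar.gz
RUN mkdir -p /opt/myapp \
    && tar -xzf /tmp/myapp.tar.gz -C /opt/myapp \
    && rm /tmp/myapp.tar.gz

ENTRYPOINT ["/opt/myapp/bin/myapp"]
```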
### Also, dont use :latest
This leads me to a different but related bad practice: using the `:latest` tag. Its a common pitfall for folks who use containers but dont necessarily build them themselves.
Its easy to get started with the `:latest` tag and its understandable to want the latest version without having to go into files and manually edit everything. But what can happen is that you update and suddenly its pointing to a new version and breaking things.
Ive seen this happen where youre just running something on a local server and your disk is full, so you empty out your Docker images. The next time you pull, its with a new version which now no longer works and youre stuck trying to figure out what version you were on before.
### Instead: Pin your dependencies
You should be pinning your dependencies to a specific version, and updating to newer versions intentionally rather than by default.
The most reliable way to do this is with a process called GitOps:
- In the context of Kubernetes, all the YAML files you deploy with Kubernetes are stored in the central Git repository.
- You have software in your Kubernetes cluster that automatically pulls the files from your Git repo and installs them into the cluster.
- Then you can use a tool like [Dependabot](https://github.com/dependabot) or [Renovate](https://github.com/renovatebot/renovate) to automatically create PRs with a new version (if there is one) so you can test and approve it, and its all captured in your Git history.
GitOps might be a bit excessive if youre only running a small hobby project on a single server, but in any case you should still pin a version.
For a long time, authentik purposefully didnt have a `:latest` tag, because people would use it inadvertently (sometimes not realizing they had an auto-updater running). Suddenly something wouldnt work and there wasnt really a way to downgrade.
We have since added it due to popular request. This is how authentiks version tags work:
- Our version number has three segments reflecting the date of the release, so the latest currently is [2023.10.1](https://goauthentik.io/docs/releases/2023.10).
- You can either use 2023.10.1 as the tag, pinning to that specific version
- You can pin to 2023.10, which means that you always get the latest patch version, or
- You can use 2023, which means you always get the latest version within that year.
The principle is roughly the same with any project using [SemVer](https://semver.org/): you could just lock to v1, which means you get the latest v1 with all minor patches and fixes, without breaking updates. Then you switch to v2 when youre ready.
With this approach you are putting some trust in the developer not to publish any breaking changes with the wrong version number (but youre technically always putting trust in some developer when using someone elses software!).
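In a Compose file, the difference looks like this (the image name is shown for illustration; use whatever your deployment actually references):

```yaml
services:
  server:
    # Avoid: silently tracks whatever was published last
    # image: ghcr.io/goauthentik/server:latest
    # Prefer: pin a version and upgrade deliberately
    image: ghcr.io/goauthentik/server:2023.10.1
```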
## Mistake #3: Writing logs to files instead of stdout
This is another issue on the infrastructure side that mainly happens when you put legacy applications into containers. It used to be standard that applications put their log output into a file, and youd probably have a system daemon set up to rotate those files and archive the old ones. This was great when everything ran on the same server without containers.
A lot of software still logs to files by default, but this makes collecting and aggregating your services logs much harder. Docker (and containers in general) expect that you log to standard output so your orchestration platform can route the logs to your monitoring tool of choice.
Docker puts the logs into a JSON file that it can read itself, so it can see the timestamps and which container each log line refers to. You can set up log forwarding with both Docker and Kubernetes. If you have a central logging server, a logging driver or plugin takes the standard output of a container and sends it to that server.
Not logging to `stdout` just makes it harder for everyone, including making it harder to debug: instead of just running `docker logs` plus the name of the container, you need to `exec` into the container, find the log files, and then read through them to start debugging.
### This bad practice is arguably the easiest one to work around
As an engineer you can easily redirect the logs back from a file into the standard output, but theres no real reason not to do it the “correct” way.
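One common workaround (the official nginx image does something similar) is to symlink the application's log files to the container's standard streams at build time; the paths below are placeholders:

```dockerfile
FROM debian:12-slim

# Anything the app writes to these "files" now shows up in `docker logs`.
RUN mkdir -p /var/log/myapp \
    && ln -sf /dev/stdout /var/log/myapp/access.log \
    && ln -sf /dev/stderr /var/log/myapp/error.log
```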
There arent many use cases where theres an advantage to writing your logs directly to a file instead of stdout—in fact the main one is for when youre making the first mistake (having your whole application in one container)! If youre running multiple services in one container, then youll have logs from multiple different processes in one place, which _could_ be easier to work with in a file vs stdout.
Even if you specifically want your logs to exist in a file, by default if you run `docker logs` it just reads a JSON file that it adds the logs to, so youre not losing anything by logging to stdout. You can configure Docker to just put the logs into a plain text file wherever you want to.
Its a little simplistic, but Id encourage you to check out [The Twelve-Factor App](https://12factor.net/) which outlines good practices for making software thats easy to run.
Are you doing containers differently and is it working for you? Let us know in the comments, or send us an email at hello@goauthentik.io!

View File

@ -1,223 +0,0 @@
---
title: "IPv6 addresses and why you need to make the switch now"
description: "IPv6 addresses have been commercially available since 2010. But is there any compelling reason for sysadmins and security engineers to make the switch?"
slug: 2023-11-09-IPv6-addresses
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- authentik
- IP address
- IPv4
- IPv6
- IP address exhaustion
- NAT Gateway
- IETF
- Internet Engineering Task Force
- IANA
- Internet Assigned Numbers Authority
- IPv6 address format
- SSO
- security
- identity provider
- authentication
hide_table_of_contents: false
---
> **_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._**
---
IPv6 addresses have been commercially available since 2010. Yet, even after Google's IPv6 rollout the following year, adoption by the system administrators and security engineers responsible for entire organizations' networks has been slower than you might expect. Nor does population size, or the plethora of work and personal devices that accompanies it, accurately predict which countries have deployed the protocol.
In this blog post, I explain briefly what IP addresses are and how they work; share why at Authentik Security we went full IPv6 in May 2023; and then set out some reasons why you should switch now.
## What are IP addresses?
IP Addresses are locations (similar to street addresses) that are assigned to allow system administrators and others to identify and locate every point (often referred to as a node) on a network through which traffic and communication passes via the internet. For example, every server, printer, computer, laptop, and phone in a single workplace network has its own IP address.
We use domain names for websites to avoid having to remember IP addresses, though our readers who are sysadmins—used to referencing all sorts of nodes deep within their organizations' networks—will recall them at the drop of a hat.
But, increasingly, since many devices are online and [96.6% of internet users now use a smartphone](https://www.oberlo.com/statistics/how-many-people-have-smartphones), most Internet of Things (IoT) devices that we have in our workplaces and homes _also_ have their own IP address. This includes:
- Computers, laptops and smartphones
- Database servers, web servers, mail servers, virtual servers (virtual machines), and servers that store software packages for distribution
- Other devices such as network printers, routers and services running on computer networks
- Domain names for websites, which are mapped to the IP address using Domain Name Servers (DNS)
IP addresses are centrally overseen by the Internet Assigned Numbers Authority ([IANA](https://www.iana.org/)), with five [Regional Internet Registries](https://www.nro.net/about/rirs/) (RIRs).
## What is the state of the IP landscape right now?
Well, its all down to numbers.
The previous version of this network layer communications protocol is known as IPv4. From our informed vantage point—looking over the rapid growth of ecommerce, business, government, educational, and entertainment services across the internet—its easy to see how its originator could not possibly have predicted that demand for IPv4 addresses would outstrip supply.
Add in the ubiquity of connected devices that allow us to access and consume those services and you can see the problem.
IP address exhaustion was foreseen in the 1980s, which is why the Internet Engineering Task Force ([IETF](https://www.ietf.org/)) started work on IPv6 in the early 1990s. The first RIR to run out of IPv4 addresses was ARIN (North America) in 2015, followed by the RIPE (Europe) in 2019, and LACNIC (South America) in 2020. The very last, free /8 address block of IPv4 addresses was issued by IANA in January 2011.
The following realities contributed to the depletion of the IPv4 addresses:
- IPv4 addresses were designed to use 32 bits and are written with decimal numbers
- This allowed for 4.3 billion IP addresses
The IPv4 address format is written as four decimal numbers (each between 0 and 255) separated by periods, for example 192.0.2.1.
Even though IPv4 addresses still trade hands, its actually quite difficult now to buy a completely unused block. Whats more, theyre expensive for smaller organizations (currently around $39 each) and leasing is cheaper. Unless you can acquire them from those sources, youll likely now be issued IPv6 ones.
> Interesting historical fact: IPv5 was developed specifically for streaming video and voice, becoming the basis for VoIP, though it was never widely adopted as a standard protocol.
### IPv6 addresses, history and adoption
The development of IPv6 was initiated by the IETF in 1994, and it was published as a draft standard in December 1998. IPv6 went live in June 2012, and was ratified as an internet standard in July 2017.
There is an often circulated metaphor from J. Wiljakkas IEEE paper, [Transition to IPv6 in GPRS and WCDMA Mobile Networks](https://ieeexplore.ieee.org/document/995863), stating that every grain of sand on every seashore could be allocated its own IPv6 address. Let me illustrate.
- IPv6 addresses were designed to use 128 bits and are written with hexadecimal digits (the numbers 0-9 and the letters A-F).
- So, how many IPv6 addresses are there? In short, there are about 340 undecillion (2^128, or roughly 3.4 × 10^38) addresses available!
The IPv6 address format is written as 8 groups of 4 hexadecimal digits (each digit representing 4 bits), with the groups separated by colons.
> Importantly, the hierarchical structure optimizes global IP routing, keeping routing tables small.
If you plan to make the switch to IPv6, its worth noting that youll need to ensure that your devices, router, and ISP all support it.
### Upward trend in the worldwide adoption by country
Some 42.9% of Google users worldwide access search using the IPv6 protocol. It's intriguing to note which countries lead adoption of the IPv6 protocol:
- France 74.38%
- Germany 71.52%
- India with 70.18%
- Malaysia 62.67%
- Greece 61.43%
- Saudi Arabia 60.93%
And yet China, Indonesia, Pakistan, Nigeria, and Russia lag surprisingly far behind many others in terms of adoption (between 5% and 15%), given their population sizes. Even many ISPs have been slow to switch.
You can consult Googles [per country IPv6 adoption statistics](https://www.google.com/intl/en/ipv6/statistics.html#tab=per-country-ipv6-adoption) to see where your location sits in the league table.
## Why we decided on a full IPv6 addresses deployment
The average internet user wont be aware of anything much beyond what an IP address is, if even that. However for system administrators, IP addresses form a crucial part of an organizations computer network infrastructure.
In our case, the impetus to use IPv6 addresses for authentik came from our own internal Infrastructure Engineer, Marc Schmitt. We initially considered configuring IPv4 for internal traffic and, as an interim measure, providing IPv6 at the edge only (remaining with IPv4 for everything else). However, that would still have required providing IPv6 support for customers who needed it.
In the end, we determined it would be more efficient to adopt the IPv6 protocol while we still had time to purchase, deploy, and configure it at our leisure across our existing network. We found it to be mostly a straightforward process. However, there were still some applications that did not fully support IPv6, but we were aided by the fact that we use open source software. This meant that we were able to contribute back the changes needed to add IPv6 support to the tools we use. We were thrilled to have close access to responsive maintainers and communities for some (not all!) of the tools we rely on, to help with any integration issues. [Plausible](https://plausible.io/), our web analytics tool, was especially helpful and supportive in our shift to IPv6.
### Future proofing IP addresses on our network and platform
While it seemed like there was no urgent reason to deploy IPv6 across our network, we knew that one day, it _would_ suddenly become pressing once ISPs and larger organizations had completely run out of still-circulating IPv4 addresses.
For those customers who have not yet shifted to IPv6, we still provide IPv4 support at the edge, configuring our load balancers to receive requests over IPv4 and IPv6, and forwarding them internally over IPv6 to our services (such as our customer portal, for example).
### Limiting ongoing spend
Deployment of IPv6 can be less expensive as time goes on. If wed opted to remain with IPv4 even temporarily, we knew we would have needed to buy more IPv4 addresses.
In addition, we were paying our cloud provider for using the NAT Gateway to convert our IPv4 addresses—all of which are private—to public IP addresses. On top of that, we were also charged a few cents per GB of data processed. The costs can mount up, particularly when we pull Docker images multiple times per day. These costs were ongoing and on top of our existing cloud provider subscription. With IPv6, however, since IP addresses are already public—and there is no need to pay for the cost of translating them from private to public—the costs are limited to paying for the amount of data (incoming and outgoing traffic) passing through the network.
### Unlimited pods
Specifically when using the IPv4 protocol, theres a limitation with our cloud provider if pulling IP addresses from the same subnet for both nodes and Kubernetes pods. You are limited by the number of pods (21) you can attach to a single node. With IPv6, the limit is so much higher that it's insignificant.
### Clusters setup
All original clusters were only configured for IPv4. It seemed like a good time to build in the IPv6 protocol while we were already investing time in renewing a cluster.
Wed already been planning to switch out a cluster for several reasons:
- We wanted to build a new cluster using ArgoCD (to replace the existing FluxCD one) for better GitOps, since ArgoCD comes with a built-in UI and provides a test deployment of the changes made in PRs to the application.
- We wanted to change the Container Network Interface (CNI) to select an IP from the same subnet as further future-proofing for when more clusters are added (a sandbox for Authentik Security and another sandbox for customers, for example). We enhanced our AWS-VPC-CNI with [Cilium](https://cilium.io/) to handle the interconnections between clusters and currently still use it to grab IPs.
## IPv6 ensures everything works out-of-the-box
If youre a system administrator with limited time and resources, youll be concerned with ensuring that all devices, software, or connections are working across your network, and that traffic can flow securely without bottlenecks. So, its reassuring to know that IPv6 works out of the box—reducing the onboarding, expense, and maintenance feared by already overburdened sysadmins.
### Stateless address auto-configuration (SLAAC)
When it comes to devices, each device on which IPv6 has been enabled will independently assign IP addresses by default. With IPv6, there is no need for static or manual DHCP IP address configuration (though manual configuration is still supported). This is how it works:
1. When a device is switched on, it requests a network prefix.
2. A router or routers on the link will provide the network prefix to the host.
3. Previously, the subnet prefix was combined with an interface ID generated from an interface's MAC address. However, having a common IP based on the MAC address raises privacy concerns, so now most devices just generate a random one.
### No need to maintain both protocols across your network or convert IPv4 to IPv6
Unless you already have IPv6 deployed right across your network, if your traffic comes in via IPv4 or legacy networks, youll have to:
- Maintain both protocols
- Route traffic differently, depending on what it is
### No IP addresses sharing
Typically, public IP addresses, particularly in Europe, are shared by multiple individual units in a single apartment building, or by multiple homes on the same street. This is not really a problem for private individuals, because most people have private IP addresses assigned to them by their routers.
However, those in charge of the system administration for  organizations and workplaces want to avoid sharing IP addresses. We are almost all subject to various country, state, and territory-based data protection and other compliance legislation. This makes it important to reduce the risks posed by improperly configured static IP addresses. And, given the virtually unlimited number of IP addresses now available with the IPv6 protocol, configuring unique IP addresses for every node on a network is possible.
## OK but are there any compelling reasons for _me_ to adopt IPv6 addresses _now_?
If our positive experience and outcomes, as well as the out-of-the-box nature of IPv6 have not yet persuaded you, these reasons might pique your interest.
### Ubiquitous support for the IPv6 addresses protocol
Consider how off-putting it is for users that some online services still do not offer otherwise ubiquitous identity protection mechanisms, such as Single Sign-on ([SSO](https://goauthentik.io/blog/2023-06-21-demystifying-security)) and Multi-factor Authentication (MFA). And think of systems that do not allow you to switch off or otherwise configure pesky tracking settings that contradict data protection legislation.
Increasingly, and in the same way, professionals will simply assume that our online platforms, network services, smart devices, and tools support the IPv6 protocol—or they might go elsewhere. While not every app supports IPv6, and migration can be risky, putting this off indefinitely could deter buyers from purchasing your software solution.
### Man-in-the-Middle hack reduction
Man-in-the-Middle (MITM) attacks rely on redirecting or otherwise changing the communication between two parties using Address Resolution Protocol (ARP) poisoning and other naming-type interceptions. This is how many malicious ecommerce hacks target consumers, via spoofed ecommerce, banking, password reset, or MFA links sent by email or SMS. Experiencing this kind of attack is less likely when you deploy and correctly configure the IPv6 protocol, and connect to other networks and nodes on which it is similarly configured. For example, you should enable IPv6 routing, but also include DNS information and network security policies.
## Are there any challenges with IPv6 that I should be aware of before starting to make the switch?
Great question! Lets address each of the stumbling blocks in turn.
### Long, multipart hexadecimal numbers
Since they are very long, IPv6 addresses are less memorable than IPv4 ones.
However, this has been alleviated using a built-in abbreviation standard. Here are the general principles:
- Dropping any leading zeros in a group
- Replacing a group of all zeros with a single zero
- Replacing one contiguous run of all-zero groups with a double colon (::)
Though this might take a moment to memorize, familiarity comes through use.
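If you want to check an abbreviation rather than work it out by hand, Python's built-in `ipaddress` module will do it for you (the address below is from the reserved documentation range):

```python
import ipaddress

addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.compressed)  # 2001:db8::1 -- the abbreviated form
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001 -- the full form
```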
### Handling firewalls in IPv6
With IPv4, the deployment of Network Address Translation (NAT) enables system administrators in larger enterprises, with hundreds or thousands of connected and online devices, to provide a sense of security. Devices with private IP addresses are displayed to the public internet via NAT firewalls and routers that mask those private addresses behind a single, public one.
- This helps to keep organizations IP addresses, devices, and networks hidden and secure.
- Hiding the private IP address discourages malicious attacks that would attempt to target an individual IP address.
By removing the need for a huge number of public IPv4 addresses, NAT offers additional benefits for sysadmins:
- Helping to manage the central problem of the limited number of available IPv4 addresses
- Allowing for flexibility in how you build and configure your network, without having to change IP addresses of internal nodes
- Limiting the admin burden of assigning and managing IP addresses, particularly if you manage a large number of devices across networks
### Firewall filter rules
It is difficult for some to move away from this secure and familiar setup. With IPv6, however, NAT is not deployed. This might prove to be a concern if you are used to relying on NAT to provide a layer of security across your network.
Instead, while a firewall is still one of the default protective mechanisms, system administrators must deploy filter rules in place of NAT.
- In your router, youll be able to add both IPv4 and IPv6 values—with many device vendors now enabling it by default.
- Then, if youve also configured filtering rules, when packets encounter the router, theyll meet any firewall filter rules. The filter rule will check if the packet header matches the rules filtering condition, including IP information.
- If it does, the filter action is applied
- If not, the packet simply proceeds to the next rule
If you configure filtering on your router, don't forget to also enable IPv6 there, on your other devices, and with your ISP.
## Have you deployed IPv6 addresses to tackle address exhaustion?
Yes, it is true that there is still a way to go before IPv6 is adopted worldwide, as we discussed above. However, as the pace of innovative technologies, solutions, and platforms continues, we predict this will simply become one more common instrument in our tool bag.
Wed be very interested to know what you think of the IPv6 protocol, whether youve already converted and how you found the process. Do you have any ongoing challenges?
Join the Authentik Security community on [Github](https://github.com/goauthentik/authentik) or [Discord](https://discord.com/invite/jg33eMhnj6), or send us an email at hello@goauthentik.io. We look forward to hearing from you.

Binary file not shown.


View File

@ -1,112 +0,0 @@
---
title: "Happy Birthday to Us!"
description: "We are celebrating our one-year anniversary since the founding of Authentik Security.."
slug: 2023-11-1-happy-birthday-to-us
authors:
- name: Jens Langhammer and the authentik team
url: https://goauthentik.io
# image_url: https://github.com/goauthentik/authentik/main/website/static/img/icon.png
tags:
- startups
- founders
- building a team
- SSO
- security
- identity provider
- authentication
hide_table_of_contents: false
---
> **_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._**
---
Even though we are shouting _Happy Birthday to Us_, we want to start by saying:
> Thank You to you all, our users and supporters and contributors, our questioners and testers!
We simply would not be here, celebrating our 1-year mark, without your past and present support. While there are only 7 employees at Authentik Security, we know that our flagship product, [authentik](https://goauthentik.io/), has a much bigger team... you all! Our contributors and fellow builders and users are on the same team that took us this far, and we look forward to continuing the journey with you to build our amazing authentication platform on authentik!
!["Photo by <a href="https://unsplash.com/@montatip?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">montatip lilitsanong</a> on <a href="https://unsplash.com/photos/chocolate-cake-with-cherry-on-top-eOcKHriNVk4?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>"](./image1.jpg)
<!--truncate-->
### The backstory
Our CTO, [Jens Langhammer](https://www.linkedin.com/in/beryju), began coding authentik in 2018, with the first commit on November 11. By October of 2021 there was already excitement around the project, much of it on Reddit, not your usual suspect for open source news. The enthusiasm about the SSO project caught eyes in the ecosystem.
The initial emails about building a company happened in April 2022, when [Open Core Ventures](https://opencoreventures.com/) approached Jens and expressed interest in supporting his open source project with funding and operational guidance. A matter of months later, and some hard thinking by Jens, the dotted lines were signed, the funding was there, and in November of 2022 Authentik Security was founded.
There are hundreds of thousands of open source projects out there; to have authentik selected, and deemed robust and useful enough to receive backing and support, with an opportunity to turn it into a proper company with the resources needed to keep building new features, was a remarkable opportunity.
Sure, building a community is an exciting opportunity, but it's also a slightly terrifying one. Those of us who work in open source ecosystems understand well how important it is to simultaneously demonstrate steady growth and dedication to the project, a willingness to take risks, and above all, value. Building software is almost always fun; building software that solves problems is also really hard work.
> Fast forward a year (and it WAS fast!)…
### A year flies when youre having fun
A lot happens in a year. This week we are celebrating our 1st full year as an incorporated company. The past year was focused on Jens settling into his role as CTO, hiring the team, pulling us all together to keep releasing new features, and learning the joy of pre-sales work and calls with customers. (Hint: hed rather be coding!)
Once you get to know Jens, you won't be surprised by his answer about what he most looked forward to in building up a team and a company and further building out the product:
- Building [even more] cool features that he didnt have the time to do all himself, and hiring professionals to do specialized work.
- Building something that outlasts the builder… something useful to the world, working with other founders, and taking a project to a product to a software staple.
**Building a new team from scratch**
That task alone will scare most of us. In software, teamwork is most definitely what makes the dream work, so finding the right talents, skill sets, and experiences to complement Jens' deep technical skills and full-stack experience was of paramount importance. We now have developers with expertise in frontend and backend development, infrastructure, and security, as well as a content editor.
Of course, it is not just the technical skills that a potential new hire needs; as important are less-measurable skills like collaboration, communication, and perhaps most importantly, what we call "technical curiosity".
> How does this thing work, from whom can I learn more, and with whom can I share my knowledge?
We have that team now, and are grateful for it. Celebrating the one-year mark of Authentik Security means a lot to us!
**Keep those PRs merging**
Keeping new functionality rolling out (and keeping up with Issues and PRs in our repository) never slowed down much, even during the period of incorporating as a company and building a team. Support for new providers, becoming [OpenID certified](https://goauthentik.io/blog/2023-03-07-becoming-openid-certified-why-standards-matter), adding support for [SCIM](https://goauthentik.io/docs/providers/scim/) and [RADIUS](https://goauthentik.io/docs/providers/radius/) protocols, and a [lot more](https://goauthentik.io/docs/releases).
Right at the end of our first year, we released our [Enterprise version](https://goauthentik.io/blog/2023-08-31-announcing-the-authentik-enterprise-release), with dedicated support. And just last week, we rolled out one of the most important capabilities in an identity management platform: [RBAC](https://goauthentik.io/docs/user-group-role/access-control/) (role-based access control).
**New processes, new ideas, and expected growing pains**
With a new team, come new processes. Someone has to decide which emoji to use for which infrastructure task thats completed.
OK, ok, beyond selecting emojis, we also (slowly and deliberately) defined new logical and pragmatic ways to create discrete work tasks and to track work by sprints. This effort went in fits and starts; now we move much more rapidly, with defined tasks and open communication about who is working on what. We are also formalizing our release processes, doubling down on our CI/CD pipeline and deployment packaging testing, and implementing technical review for all published content.
Increased team size means more ideas, often brought in by someone on the team who gained experience in a certain area on their previous job. For example, some of our happy implementations include moving to ArgoCD (yay for [deploying your PRs](https://dev.to/camptocamp-ops/using-argocd-pull-request-generator-to-review-application-modifications-236e) app modifications in a test environment!), a suggestion from our Infrastructure engineer. As was the decision to move fully to IPv6 (look for an upcoming blog about that soon!). Our frontend developer is busy building the UI layer for new features (RBAC is here!) and as he goes, templatizing our frontend workflows and components. Further expertise in APIs, security, and technical content are part of the team.
We can say that our growing pains havent been too dreadful. Sure, there was the one month when we went back and forth between three tools for tracking work tasks, but… In general, theres nothing that a good conversation and some testing cant solve.
> Perhaps the biggest growing pain is the rest of the team learning how to prevent the founder from working himself into exhaustion. ;-)
### A founders brain and heart
Our team at authentik has a shared love of building things, and that shapes both how we work together and also our product, even how we communicate with our community.
An interesting counterbalance to our shared love of building is a shared sense of humility, of which we get daily doses from Jens.
> To build boldly yet with humility is what sets some founders apart from others.
The tone and esprit of the company is one reason it's so meaningful to celebrate our 1-year birthday; we can happily celebrate a hard year of doing things with full, enthusiastic engagement. At authentik, nerdiness is embraced, technical curiosity flourishes, and transparency is a big part of our nature. Speaking of how we communicate with our community, our Discord forum is (in addition to GitHub) an important place where transparency matters. For example, we recently asked our community what they preferred for a release cycle. Based on the answers, we lengthened the release cadence from monthly to every two or three months.
Moving from the role of solo creator of an open source project, to being primary maintainer of a popular, growing project, to suddenly being CTO of a company based on that project is quite a transition. A natural question we wanted to ask Jens is "What's been the hardest thing about building a company?" His answers:
- "Recognizing and accepting that you dont get to work on only what you want to, 100% of time… "
- "Learning to delegate, learning to let go a bit, trusting others to do it in their way, in the right spirit. Especially letting others get into the code… Ive learned that instead of saying I would not have done it this way, I instead measure the success of the change itself."
### Whats up next?
Going forward, we want to keep our focus on building features and supporting authentication protocols that our users want, but we have also identified several specific goals for this coming year:
- Increase our focus on UX and ease-of-use, templatizing as much as possible of the frontend components, and developing a UI style Guide
- Research and implement new functionality around remote machine access and management
- Define increasingly robust tests and checks for our CI/CD pipeline and build process
- Implement even stronger integration and migration testing, both automated and manual
- Spend more time on outreach and learning from our users about what you all want and where we can improve.
This space of security and authentication is a hard space, especially with larger configurations with multiple providers, large user sets to be imported, and the absolute minute-by-minute race against malevolent hackers.
Oh, and then there is that business of actually promoting and selling your product. But, as a team, we are proud of the product and excited to share it with others who need a solid, secure authentication platform.
Thanks for joining us on this celebration of our one-year birthday, and let us know any thoughts you might have. You can send an email to hello@authentik.io, or find us on [GitHub](https://github.com/goauthentik/authentik) or [Discord](https://discord.com/channels/809154715984199690).

View File

@ -1,158 +0,0 @@
---
title: Everyone agrees zero trust is good but no one correctly implements it
description: “Thanks to a few advancements (mainly Wireguard), zero trust will soon go from buzzword to reality.”
slug: 2023-11-15-everyone-agrees-zero-trust-is-good-but
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- authentik
- zero trust
- Wireguard
- NIST
- Okta
- breaches
- SSO
- security
- identity provider
- authentication
hide_table_of_contents: false
---
> **_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._**
---
Buzzwords are the scourge of the tech industry: reviled by developers, pushed by vendors, and commanded by executives.
All too often, a buzzword is the first signal of rain ([or worse](https://media.licdn.com/dms/image/C4E12AQGspNcRlqpg0A/article-inline_image-shrink_1000_1488/0/1593238107360?e=1700092800&v=beta&t=SCKZ-7W_R9swJPwEpBB35OsVc0jE093ylcjxFPm6FZc)): Marketers have created a trend; vendors are using the trend to explain why you need to buy their software right now; executives are worried about a problem they didnt know existed before they read that Gartner report; and the downpour rains on developers.
“_Implement zero trust!_”
“_Why arent we shifting left?_”
“_Are we resilient? Well, can we get more resilient?_”
After a while, buzzwords start to look like trojan horses, and the invading army feels like a swarm of tasks that will result in little reward or recognition. Its tempting to retreat to cynicism and to ignore every Term™ that comes your way.
But this can be risky. For better or worse, good ideas inevitably get branded, and if you want to keep up, you need to see past the branding even if it involves stripping away the marketing fluff to see the nugget of an idea within.
Theres no better example of this than zero trust. In this post, well briefly explore the term's history, explain how it became such an untrustworthy buzzword, and argue that thanks to a few advancements (mainly Wireguard), zero trust will soon go from buzzword to reality.
<!--truncate-->
## Zero trust: An idea ahead of its time
Ideas tend to emerge at inconvenient moments.
Sometimes, there are innovators who think so far ahead that people can't keep up (think Van Gogh), and sometimes, everyone understands the idea, but few can implement it, sometimes for quite a while.
Zero trust falls into the latter category, and in the past decade, the terms popularity has outpaced its real-world implications.
### A brief history of zero trust
The term “zero trust” originated in Stephen Marshs [1994 doctoral thesis](https://www.cs.stir.ac.uk/~kjt/techreps/pdf/TR133.pdf) “Formalising Trust as a Computational Concept.” The thesis is complex and stretches far beyond sheer computing concerns (Marsh wrote it for his doctoral program in philosophy).
A decade and a half later, John Kindervag revived the term in 2010 while writing articles for Forrester Research. Around the same time, in 2009, Google debuted [BeyondCorp](https://cloud.google.com/beyondcorp), an implementation of numerous zero-trust concepts. This is when we see the emergence of what we know as zero trust today.
Zero trust is simultaneously a critique of the traditional security mindset and a gesture at a new framework.
The argument is that the previous mindset is, essentially, a veneer of strength over a fundamentally brittle defense. Traditional security systems follow a perimeter-based structure and a “trust but verify” philosophy. Users and endpoints within an organizations perimeter are granted implicit trust, meaning that malicious internal actors and stolen credentials can cause significant damage.
If your company has an office, that means a breach can start when people access the network, and if your company is virtual, that means a breach can open as soon as people start logging into things they shouldnt.
The zero trust model instead eliminates implicit trust and, as the name implies, trust altogether. The framework is “zero trust” because it considers trust a vulnerability. In zero trust, all users are authenticated, authorized, and continuously validated before gaining or maintaining access to systems, applications, and data.
In the traditional model, theres one seemingly strong but ultimately brittle barrier; in the zero trust model, trust is never given, and validation is continuous.
### So, why didnt zero trust take off?
Zero trust, when you think about it, is fairly intuitive, and its advantages are clear. Despite that, zero trust didnt take off in 2010.
When the zero trust model emerged, it had clear advantages, and many security experts agreed on its value. But practical realities meant that many organizations couldnt adopt it.
At the time, when many enterprises were still shifting software to the cloud and before remote work became truly normal, many organizations thought perimeter-based security worked well enough. Leaders could read a Forrester paper on zero trust, find it interesting, and agree in theory but not feel compelled to rebuild their entire security system.
Security concerns already suffer from a “But it wont happen to me” effect, and the prospect of making a huge investment for the sake of an abstract benefit (the ROI of _not_ getting a breach, maybe) was hard to calculate.
Vendors didnt make these calculations easier. When it debuted, zero trust was more an abstract idea than a practical methodology, and security vendors did little to clarify things. Most vendors were not ready for zero trust at all, and even those that claimed to be couldnt integrate and interoperate well because the ecosystem wasnt mature yet.
[NIST](https://www.nist.gov/) (National Institute of Standards and Technologies), which published [Zero Trust Architecture in 2020](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207.pdf), agreed, writing, “During the technology survey, it became apparent that no one vendor offers a single solution that will provide zero trust.” They argued, too, that because theres “no single solution” for zero trust, “It is impossible to have a single protocol or framework that enables an enterprise to move to a ZTA.”
Were in an awkward spot. Everyone agrees zero trust is good but few know how to implement it. Vendors have coalesced around zero trust messaging, but few can actually meet the promises on their landing pages. Many companies that claim to be zero trust arent, and many companies that havent thought much about zero trust have almost stumbled into it.
## Zero trust for the “zero trust” buzzword
In the decade after the “zero trust” concept was popularized, adoption proved so difficult that the term began to resemble a nearly meaningless buzzword.
Until NIST defined the term better in their above-mentioned Zero Trust Architecture article in 2020, there was no clear definition. Without clarity, it was hard for any developer, security engineer, or business leader to verify a vendor's claim that their solution was truly zero-trust. (And thats not even considering whether one solution could claim to offer zero trust at all).
Given the hype and the lack of clarity, many vendors, marketers, and “thought leaders” pushed zero-trust products that were, at best, partial solutions. This push created a lot of cynicism amongst developers and security engineers.
As Den Jones, CSO at Banyan Security, [writes](https://www.linkedin.com/pulse/little-reflection-zero-trust-hype-den-jones/?trk=pulse-article_more-articles_related-content-card), “the level of marketing BS,” including frameworks, papers, and more, became overwhelming: “My concern now is that theres an overwhelming amount of information related to zero trust, so much so that people struggle to decipher it into something meaningful, something that actually solves their problems.”
This isnt the first time hype and vendor pitches outpaced reality, but it was particularly troublesome because zero trust, the concept, was too good to dismiss, and zero trust, the products, were too lacking to evaluate and adopt.
The source of the problem is a terminology problem: Zero trust is more like a framework or methodology than a single solution, meaning almost every zero trust vendor is and was exaggerating.
And because zero trust depended on the rise of cloud and SaaS products, it also resembled a parallel paradigm shift that depended on those other shifts and, at the same time, superseded any given product.
The move to SaaS, for example, created a lot of incidental zero trust security just because so many resources and tools moved to the browser and behind login pages. People whod never thought about zero trust effectively implemented zero trust (at least partially) by having employees log in to Jira, Slack, Gmail, AWS, etc., every day.
Zero trust stumbled forward while the term lagged behind. A glance at Google Trends illustrates the narrative.
![graph of Google Trends](./zero-trust-1.png)
Google Trends shows that the search volume for zero trust increased way after the term originated but before the methodology really became practical. And now, search volume is flagging just as the full zero trust model becomes realistic.
## How Wireguard makes zero trust achievable
Wireguard, [started by Jason Donenfeld in 2015](https://en.wikipedia.org/wiki/WireGuard), points to a future where zero trust is finally achievable and the term can outgrow its buzzword status.
Wireguard, at its most basic, is a simple, fast, modern VPN that uses cutting-edge cryptography to make it more secure than IPsec (the long-established standard protocol suite for encrypted tunnels). As the [Wireguard site](https://www.wireguard.com/) says, “It is currently under heavy development, but already it might be regarded as the most secure, easiest to use, and simplest VPN solution in the industry.”
According to [research](https://cybernews.com/what-is-vpn/wireguard-protocol/), WireGuard is about 15% faster than OpenVPN in normal conditions and 56% faster when OpenVPN is using its TCP mode. Numerous VPN providers have adopted the Wireguard protocol, including NordVPN, Surfshark, and IPVanish.
The company that best illustrates Wireguard's potential, however, is Tailscale. Tailscale is a VPN service that provides mesh VPNs with remote access and site-to-site networking. If you're frequently on Hacker News, you've probably seen their fantastic technical articles.
![screenshot of search results for Tailscale on Hackernews](./zero-trust-2.png)
In [one of those articles](https://tailscale.com/blog/why-not-why-not-wireguard/), Avery Pennarun, founder of Tailscale, writes, “[Wireguard] is increasingly widely accepted as the future of secure VPN connectivity.” He has three main reasons:
- Wireguard is open source.
- Wireguard can run in a pure software VM and avoid hardware lock-in and bottlenecks.
- Wireguard supports a single cipher suite that is fast and secure but can work with the key exchange mechanisms you want to layer on top.
Unlike the previous era of zero trust-adjacent vendors, the focus is shifting from an all-in-one zero trust solution to protocol-level technologies that enable a range of products that can, together, help companies pursue zero trust.
With Wireguard, for example, vendors can build stateless VPNs that don't require an open, less secure connection. The customers of those vendors can then build multi-hub networks that are much more secure.
## Why it's finally time for zero trust
Wireguard is the leading edge cutting a path toward zero trust, but a few other shifts are making the movement both more necessary and more practical.
NIST, mentioned above, is removing ambiguity around zero trust and providing [clear guidance](https://www.nccoe.nist.gov/projects/implementing-zero-trust-architecture). As companies shop for vendors purporting to offer or support zero trust solutions, they can rely on this guidance to question vendors, and vendors can use the guidance to clarify their positions.
Big institutions, such as the United States Federal government, are [pushing zero trust](https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/). In an executive order, for example, the White House wrote that the Federal government needed to “advance toward Zero Trust Architecture” and that it would “develop security principles governing Cloud Service Providers (CSPs) for incorporation into agency modernization efforts.”
Vendors are also catching up. With Tailscale, for example, companies can [build a zero trust architecture over time](https://tailscale.com/blog/how-tailscale-works/) instead of lifting and shifting their entire security infrastructure. Curious companies can now pursue that curiosity bit by bit.
Zero trust arose because of a few macro trends, as we covered above, but the key ones (cloud and SaaS) have only become more dominant and more undeniable. Security isn't always the fastest-moving field, especially among enterprises, but as more companies see success, even more will follow.
Finally, different organizations are starting to reclaim zero trust, translating it from a buzzword to an organizing principle. Zero trust is returning to its roots, again becoming an architecture that organizations build and assemble, not a single purchase.
For example, in a post about [building software for a zero trust world](https://blog.palantir.com/building-software-for-a-zero-trust-world-61d440e5976e), Palantir writes that “Palantir is continuously looking for innovative ways to extend the Zero Trust paradigm, even if that requires radically re-thinking our infrastructure.” Zero trust isn't a solution to be adopted but a paradigm to be pursued.
## Incident by incident, zero trust will become inevitable
Even so, the achievement of zero trust is likely to lag as organizations continue to rely on “good enough” security practices. Now that many of the zero trust pieces are in place, however, adoption will climb more steeply as more incidents demonstrate what zero trust could have prevented.
For example, code search provider Sourcegraph recently [leaked tokens with long-lasting high-permission access](https://goauthentik.io/blog/2023-08-11-sourcegraph-security-incident). Attackers relied on the implicit trust these tokens granted, but a zero trust model wouldn't have allowed for implicit trust at all.
In another example, a breach at Okta (not [that one](https://goauthentik.io/blog/2023-01-24-saas-should-not-be-the-default); [this one](https://goauthentik.io/blog/2023-10-23-another-okta-breach)) proved the limits of a more ramshackle zero trust approach. The breach was embarrassing for Okta (primarily because several clients, including BeyondTrust and Cloudflare, noticed it first and alerted Okta), but Okta did manage to prevent a worse breach. As we wrote, “one layer of prevention succeeded when the hacker attempted to access the main internal Okta dashboard, but because Okta still views dashboard access as a new sign-in, it prompted for MFA, thus thwarting the log-in attempt.”
The two types of breaches above will drive further interest in zero trust. On the one hand, we see companies fail because they are too trusting; on the other, we see companies fail in some ways but prevent further damage thanks to a few solid elements in their security postures. The potential is clear: if companies embrace zero trust, they can do even better.
As always, we look forward to hearing your thoughts! Send us an email at hello@goauthentik.io, or join us on [GitHub](https://github.com/goauthentik/authentik) or [Discord](https://discord.com/invite/jg33eMhnj6).

Binary file not shown.

Before

Width:  |  Height:  |  Size: 183 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 231 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 7.5 KiB

View File

@ -1,152 +0,0 @@
---
title: Building an OSS security stack with Loki, Wazuh, and CodeQL to save $100k
description: “You don't have to spend a lot developing a good security posture from the beginning. Here's how we built Authentik Security's stack with mostly free and open source tools.”
slug: 2023-11-22-how-we-saved-over-100k
authors:
- name: authentik Security Team
url: https://goauthentik.io
image_url: ./icon.png
tags:
- authentik
- FOSS
- security budget
- security stack
- Red Team
- Blue Team
- SBOM
- hardening
- penetration testing
- monitoring
- SSO
- insider threats
- certifications
- security
- identity provider
- authentication
hide_table_of_contents: false
---
> **_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._**
---
There was an article recently about nearly 20 well-known startups' [first 10 hires](https://www.lennysnewsletter.com/p/hiring-your-early-team-b2b)—security engineers didn't feature at all. Our third hire at Authentik Security was a security engineer, so we might be biased, but even startups without the resources for a full-time security hire should have someone on the founding team wearing the security hat, so they get started on the right foot.
As security departments are cost centers (not revenue generators), it's not unusual for startups to take a tightwad mentality with security. The good news is that you don't need a big budget to have a good security posture. There are plenty of free and open source tools at your disposal, and a lot of what makes good security is actually organizational practices—many of which don't cost a thing to implement.
> We estimate that using mostly non-commercial security tools saves us approximately $100,000 annually, and the end result is a robust stack of security tools and processes.
Here's how we built out our security stack and processes using mostly free and open source software (FOSS).
<!--truncate-->
## Blue Team efforts
Security efforts can mostly be grouped into two categories: Blue Team and Red Team. Your Blue Team is defensive, guarding against potential attacks. The Red Team is offensive, actively seeking out weaknesses and potential vulnerabilities. Startups with scant resources should focus on Blue Team activities first.
### Visibility: Do you know what is happening in your environment?
The first step is to get eyes into your environment through SIEM (Security Information and Event Management). A security person's worst nightmare is things happening without them knowing about it. You can't react to an attack that you don't know is happening! You need a tool that monitors your team's device logs and flags suspicious activity.
We're an all-remote and globally distributed team, which makes monitoring both harder and more important; team members can log in from anywhere, at any time, and we don't have a central headquarters to house a secure server for backups, for example. We needed something that's available worldwide and compatible with our endpoint device architectures, cloud infrastructure, and SaaS solutions.
We settled on [Wazuh](https://wazuh.com/platform/siem/), which has been around for a long time, is open source, and is well supported. We'll acknowledge that it is a bit harder to deploy than some other, proprietary solutions. This can often be the case with FOSS, and it's a tradeoff you have to accept when you're not paying for something.
If you don't want to use something that's tricky to stand up, you can of course pay for a tool with which you'll get customer support and all those good things. Your first priority should be picking something that fits your company's needs.
We also use Grafana's [Loki](https://grafana.com/oss/loki/) (which is free for self-hosted environments) for certain types of log aggregation. Logging is still a staple of security awareness, so do your research for the best logging and analysis solution.
The general idea behind having good visibility is to gather as many data points as possible while minimizing ongoing maintenance overhead. Make no mistake, this step is not only crucial, but never-ending. Companies are always standing up and tearing down infrastructure, on- and off-boarding employees, and so on. Without visibility and monitoring of these activities, it's easy to leave something exposed to opportunistic attackers.
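To make the log-aggregation piece concrete, here is a minimal sketch of pushing a log line into Loki over its HTTP push API (in practice an agent like Promtail usually ships logs for you). The endpoint URL and labels below are placeholders, not our actual setup:

```python
import json
import time

import requests

# Placeholder endpoint; a real deployment would point at your own Loki
# instance, likely behind authentication.
LOKI_PUSH_URL = "http://loki.example.internal:3100/loki/api/v1/push"


def push_log_line(message: str, labels: dict) -> None:
    """Send one log line to Loki via the HTTP push API."""
    payload = {
        "streams": [
            {
                "stream": labels,  # index labels such as app/env/severity
                # Loki expects [<unix timestamp in nanoseconds, as a string>, <line>]
                "values": [[str(time.time_ns()), message]],
            }
        ]
    }
    resp = requests.post(
        LOKI_PUSH_URL,
        data=json.dumps(payload),
        headers={"Content-Type": "application/json"},
        timeout=5,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    push_log_line(
        "failed login for user admin from 203.0.113.7",
        {"app": "auth-audit", "environment": "prod", "severity": "warning"},
    )
```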
### Understand your dependencies: SBOMs for the win
If you're a small, early-stage startup, you're more likely to get caught in a large-scale, net-casting campaign than in any sophisticated, targeted attack. That means it's critical to have awareness of your dependencies, so you can quickly understand if a critical vulnerability affects any part of your software supply chain. When the [Log4Shell vulnerability](https://theconversation.com/what-is-log4j-a-cybersecurity-expert-explains-the-latest-internet-vulnerability-how-bad-it-is-and-whats-at-stake-173896) surfaced in December 2021, the companies that were aware of their dependencies were able to mitigate quickly and close the attack window.
This is where a Software Bill of Materials (SBOM) comes in handy. Your SBOM isn't just a checkbox exercise for auditing and compliance requirements. We use OWASP's [Dependency Track](https://dependencytrack.org/) (also free and open source) to ingest our SBOM and help identify parts of the codebase that may be at risk from new vulnerabilities. We also use [Semgrep](https://semgrep.dev/) for code scanning with pattern-based recognition. It's open source and free to run locally.
It's also worth mentioning that if your company's product is open source, or you have an open core model (a proprietary product built on open source), you may qualify for access to free tooling from GitHub for your open source project: we use [Dependabot](https://github.com/dependabot) for automated dependency updates and [CodeQL](https://codeql.github.com/) for code analysis to identify vulnerable code.
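As a rough illustration of how this fits together, the sketch below uploads a CycloneDX SBOM to a Dependency-Track instance through its REST API so new CVEs can be matched against your dependency list. The server URL, API key, and project name are placeholders, and the endpoint details should be checked against your Dependency-Track version:

```python
import base64
from pathlib import Path

import requests

# Placeholders: your Dependency-Track URL and an API key that is allowed to
# upload BOMs. The SBOM itself comes from a CycloneDX generator for your
# language ecosystem.
DEPENDENCY_TRACK_URL = "https://dtrack.example.internal"
API_KEY = "changeme-api-key"


def upload_sbom(sbom_path: str, project: str, version: str) -> None:
    """Upload a CycloneDX SBOM so new CVEs can be matched against it."""
    bom_b64 = base64.b64encode(Path(sbom_path).read_bytes()).decode()
    resp = requests.put(
        f"{DEPENDENCY_TRACK_URL}/api/v1/bom",
        json={
            "projectName": project,
            "projectVersion": version,
            "autoCreate": True,  # create the project on first upload
            "bom": bom_b64,
        },
        headers={"X-Api-Key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    upload_sbom("bom.json", project="example-app", version="main")
```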
### Hardening
Now that you've got visibility into your environment, your next step is hardening: reducing or eliminating potential threats. We can group these efforts into two categories: _organizational security_ and _product security_.
#### Organizational security
Raise your hand if you've worked at a small startup and have seen the following:
- Shared credentials
- Spreadsheets for IT/People teams to create all logins for new employees on the day they join
- Team members introducing new software/tooling at whim
It can be a free-for-all at small companies, and while the risk is low at that scale, it can be much harder to introduce more rigorous processes later. The team will be resistant because you've added friction where there wasn't before.
Ideally, you want to introduce secure-by-default practices into your team and company early on:
- Multi-factor authentication
- Single sign on
- Just-in-time permissions
- Evaluation of new tooling
In the case of open source software, you can inspect the code to check how data is being handled, how secure the databases are, what exact kind of data is being transferred, saved, and so on. Another team best practice is vetting the tools and dependencies that the team uses; even if you don't have time or resources to do a full vet of every new piece of software your coworkers want to use, at least check for certifications.
Here at Authentik Security, we tackle a lot of risk factors with one shot: [authentik](https://goauthentik.io/). By using SSO, we can ensure every new employee has the correct credentials for accessing the appropriate workplace apps, and that every departing employee immediately has access revoked with one click. We can also quarantine suspect users, essentially cutting off access to all systems quickly. Ironically, one of the most common initial access points is ex-employee credentials.
These all contribute to defense in depth—adding layers of security and complications to make it as hard or annoying as possible for attackers to get around. These practices typically cost $0 to implement and will set you up for good security posture as you grow.
#### Product security
This layer is really anything to do with securing the actual product you're building (not your company). This typically means getting third-party penetration testing (if you don't have a dedicated Red Team—more on this below) and remediating vulnerabilities you've surfaced through your monitoring and dependency tracking efforts.
## Red Team efforts
As we mentioned above, the Red Team is offensive, meaning they attack the company (physically or remotely) to poke holes in your own defenses before the real bad actors can.
### Internal penetration testing
Now that we have implemented monitoring and hardened a few things, it's time to test how well we did. This is where we take the attacker's point of view to try to break in and test our own controls over our systems, to expose weaknesses. Just recently we discovered that Authentik had a bunch of domains that we'd left open and unmonitored. It's a constant, iterative loop of unearthing holes via your internal penetration testing (also called pentesting or white box testing) and finding ways to plug them.
There are a lot of tools to choose from here (everyone likes breaking into things!). You're never done choosing your stack—the threat landscape evolves constantly and so does the tooling to keep up with it. You'll want to pay attention to new developments by keeping an eye on discussions on Twitter, Reddit, Hacker News, etc. When a new way to attack something develops (and it always will), someone will create specialized automation tooling to address that threat. (Then your attackers are going to grab that tool and see if they can hack their way in. It's a constant wheel.)
At Authentik we use the [Kali Linux](https://www.kali.org/) distribution, which comes with a host of hacking tools, for penetration testing. It's well known within the security world and is open source and free to use.
Testing can be a tough one for small startups, because you likely won't have a dedicated Red Team and commercial pentesting doesn't come cheap. If you can save on your tooling, though, that can help free up resources for contracting out this type of work. The main goal you're after is identifying the low-hanging fruit that inexperienced actors may exploit.
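Even without a commercial engagement, a lot of that low-hanging fruit can be spotted with very simple checks. The toy sketch below sweeps a handful of common ports on hosts you own using only the Python standard library; it is nowhere near a real pentest (dedicated tools like nmap and the Kali toolchain do this properly), and the hostnames are placeholders:

```python
import socket

# Placeholder hosts; in practice this list would come from your own asset
# inventory or DNS records.
TARGETS = ["app.example.internal", "staging.example.internal"]
COMMON_PORTS = [22, 80, 443, 3389, 5432, 6379, 9200]


def open_ports(host: str, ports: list, timeout: float = 0.5) -> list:
    """Return the ports on a host that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds
            if sock.connect_ex((host, port)) == 0:
                found.append(port)
    return found


if __name__ == "__main__":
    for target in TARGETS:
        print(target, open_ports(target, COMMON_PORTS))
```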
### A note on insider threats
[Okta has been in the news](https://goauthentik.io/blog/2023-10-23-another-okta-breach) (again!) after its second major breach in two years. A team member [unknowingly uploaded a file containing sensitive information to Okta's support management system](https://www.crn.com/news/security/okta-faces-potential-for-reputational-risk-after-second-major-breach-in-two-years-analysts), highlighting the risk of insider threats.
Your employees are a risk factor—whether through malice, ignorance, or carelessness. It's not unheard of for someone to accidentally save a password publicly to the company's cloud. It can be an honest mistake, but it's very low-hanging fruit for a bad actor watching your cloud assets.
With the rise of Ransomware as a Service, there's also always the possibility that a disgruntled employee can act as an initial access broker: either accidentally or purposefully giving their credentials or their access to someone else. It's obviously not possible to prevent every compromise, so it's important that your tooling is set up to alert you to unusual activity and that your processes are in place so you can react quickly.
## Do you really need certifications?
Apart from using security certifications like ISO/IEC 27001 and SOC 2 to evaluate vendors that make the software you are using, certifications can vouch for your organizational security, which might be important to your customers, depending on what your product does and who your customers are.
For us at Authentik Security, [our source code](https://github.com/goauthentik/authentik) is available for inspection, but that doesn't tell people anything about how we handle emails, payment information, and so on. That's where a third-party certification comes in: an auditor verifies your security practices, which in turn signals to your customers that you can be trusted.
Certifications can be expensive though, and as a cash-strapped startup, you may not want or be able to invest in a certification. However, there's nothing stopping you from ingraining some of those good security practices in your company's culture anyway. That way, you're already building a strong security posture, and when the time comes, you're not rushing to implement processes that feel unnatural to the team.
Again, it comes back to getting off on the right foot, so that you're not spending 10-20x the amount in people time and resources to course correct later.
## Security doesnt have to be a big-company luxury
People imagine that large corporations have security all figured out, but a large security department doesn't guarantee that it has any idea what other teams are doing. As a small company, you do have one thing going for you: it's much easier to have eyes on everything that's happening. You're more tightly knit, and you can cover more with fewer resources.
If you talk to a lot of security people, their happy place is when no one is doing anything. Then your job's pretty easy. Unfortunately, if you want your company to succeed, you need your developers to develop, your salespeople to talk to prospects, and your CEO to meet with whomever they need to meet with. These are standard operations that all put the company at risk, but it's your job to mitigate that risk as best you can.
Our security engineer likes to say they work alongside teams, not blocking them. If security says it's their job to make sure there are no vulnerabilities, and it's the development team's job to make new features, how do you get these two sides to work together?
Realistically, everything has vulnerabilities. You're never going to have a completely safe, locked-down environment. So, you partner with other teams and find a compromise. Establish a minimum threshold people have to meet to keep going. If you're too inflexible, those teams won't want to work with you, and they won't tell you when they're making new virtual machines or writing new code.
## Repercussions
You don't need to be a security company for these things to matter. This advice applies no matter what type of product you're building.
[Some 422 million individuals were impacted by data compromises in 2022](https://www.statista.com/statistics/273550/data-breaches-recorded-in-the-united-states-by-number-of-breaches-and-records-exposed/). As consumers we have almost become numb to news of new breaches. A company gets breached, they offer some sort of credit protection, cyber insurance might go up a bit, but life goes on.
If you're still not motivated to invest in your security posture (or you're trying to win over teammates who prioritize feature shipping over everything), consider the [case of SolarWinds](https://www.sec.gov/news/press-release/2023-227). The company appears to have exaggerated its internal security posture, leading to charges from the SEC.
So not only is security important, it could actually keep you out of jail.
_What's in your security stack? Let us know in the comments, or send us an email at hello@goauthentik.io!_

Binary file not shown.

Before

Width:  |  Height:  |  Size: 14 KiB

View File

@ -1,163 +0,0 @@
---
title: Automated security versus the security mindset
description: "Automated security plays a key part in many cybersecurity tasks. But what are its failings and will a security mindset always require the human factor?"
slug: 2023-11-30-automated-security-versus-the-security-mindset
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- authentik
- automated security
- security mindset
- incident response
- vulnerabilities
- human factor in cybersecurity
- SSO
- identity provider
- authentication
- Authentik Security
hide_table_of_contents: false
image: ./authentication.png
---
> **_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._**
---
Automation plays a large and increasingly important role in cybersecurity. Cybersecurity vendors promote their Machine Learning and Artificial Intelligence products as the inevitable future. However, thanks to the work of security experts like [Bruce Schneier](https://en.wikipedia.org/wiki/Bruce_Schneier), we have more insight into the human adversaries that create the underlying risks to network security, and a better understanding of why teaching humans to have a security mindset is the critical first step to keeping your network safe.
> The best response to these malicious actors is to think like a security expert and develop the security mindset.
In this blog post, we examine why automation is such a popular solution to cybersecurity problems—from vulnerability scanning to risk assessments. Then, we will look at those tasks in which security automation by itself proves inadequate, with particular focus on automatic scanning. Next, we make a positive case for why the human factor will always be needed in security. Finally, we will propose that good security isn't a feature. It's a proactive security mindset that's required—one with a human element at its core.
![authentik UI](./authentication.png)
<!--truncate-->
## Why automate security in the first place?
Automated security is such a popular option purely because of the current dynamics:
- On the one hand, there is a growing number of security incidents, instigated by systematic threat actors who may use the exact same auto security testing tools to find and target weaknesses
- On the other, there is a shortage of trained cybersecurity professionals with adequate time resources to deal with those threats
Meanwhile, companies concerned about the security of their networks are facing the demands of savvy insurers keen to reduce their risks, while CISOs are coming under increasing personal pressure, considering some have faced new warnings of personal liabilities (including jail time, as we wrote about in a [recent blog](https://goauthentik.io/blog/2023-11-22-how-we-saved-over-100k#repercussions)) from government legislators.
But it's not just a personnel problem. The nature of some cybersecurity approaches, such as penetration testing, also plays a part. Many of a security engineer's tasks are repetitive and prolonged. Automated security testing means time can be freed up to make the best use of an internal security engineer or external pentester's resources.
Finally, it is impossible to deny that securing the perimeter (running regular scans for misconfigurations and unusual behavior) and enforcing robust security policies are all impossible to deploy without some automation. 24/7/365 monitoring, processing massive data sets, and rapid threat detection and remediation all call for significant automated elements. Automated security is also key in helping scale cybersecurity operations to match company, staffing, system, and platform growth.
### What is the role of automation in security tasks?
Let's not throw the baby out with the bath water. Automation has a place and a positive role to play in cybersecurity. Auto security testing tools are best deployed for tasks that are repetitive and routine, and that require high-volume processing.
Examples of these tasks include:
- Scheduled tasks such as vulnerability scanning
- 24/7 user and other activity monitoring
- Actions that require speed such as detecting and immediately responding to malicious intrusions
Removing tasks like these from the manual operations of your SOC (security operations center) aids efficiency, supports your security team, and helps ameliorate any skills shortage.
### What are the benefits of an automated security system?
Automated security also excels in:
- Reducing human error
- Eliminating manual steps
- Lowering the number of false positives
- Updating software
- Helping with compliance
- Enhancing incident response and threat intelligence
## Why automation is a threat to cybersecurity
If automation is such a popular and necessary asset in the cybersecurity field, why can't we automate everything?
_Let's think: Could over-reliance on automated security testing ultimately prove detrimental to cybersecurity and threaten the safety of your systems?_
To help avoid this, we need to acknowledge that automation can't:
- Keep security teams up to date with new standards, such as the NIST Cybersecurity Framework; the ISO/IEC 27001 standard for information security management; the CIS Critical Security Controls; the OSSTMM; the Web Application Security Consortium (WASC 2.0); or the finance standard of PCI Data Security Standards for the payment card industry
- Adjust your internal security policies and practices to all the nuances of relevant industry, country, or regional regulations such as NIST SP 800-52; the California Consumer Privacy Act; the Canadian PIPEDA; the EU's GDPR; or HIPAA's personal health data legislation
- Rapidly respond to every new CVE or every item that makes an appearance in the SANS Top 25, or the most common vulnerabilities listed in the OWASP lists
- Ensure that your own internal cybersecurity protocols and policies are enforced
_But what else?_
The first point to remember is that automated solutions can only reliably alert and respond to the threats to your network, services, databases, APIs, and applications that they've been configured to detect. This configuration is limited to the settings available in the particular software. Automated processes are only as good as the rules human engineers give them. Security processes must still be configured and employed correctly.
And, your own company's internal business logic must be factored in. This is where pentesters (who may, of course, rely on some automated tools to help them identify some vulnerabilities across your network) can delve deep on specific vulnerabilities and apply your company's custom business logic and data breach implications. Resultant summary reports must explain the business, financial, reputational, data, and user implications of likely breaches, investigations and penalties.
Also, malicious hackers can use automated security techniques just as much as defenders to find potential security flaws in an organization's network. They use novel attacks built on vulnerabilities that automated tools are unable to detect at all, and they exploit mistakes made by users that automation by itself can't address. Examples include social engineering attacks that can begin with an innocuous-looking email, or an SMS or email phishing scam. Given that over 80% of bad actors gain illegitimate entry using social engineering attacks, it is obvious that company-wide staff training is an excellent deployment of resources.
## Against automatic scanning in favor of a proactive security mindset
In the case of social engineering attacks that we've just mentioned, a security-oriented mindset is what will keep your staff watchful—not the knowledge of automated tools.
_Could mindset, then, be the greatest weapon in your defensive arsenal? Let's explore further._
### What elements are crucial to a security mindset?
Despite the advantages of automation in security scanning, the element of human expertise is needed in many steps of the scanning process. There is no purely automatic way to proactively identify all new threats and preempt sophisticated or unconventional attacks, for example. Security engineers must wait for the tools on which they rely to be updated with the latest CVEs, and they must then have the expertise to understand the reasons and logic behind threats. They will sometimes have to manually validate these threats where required, then plan mitigation activities for future avoidance.
Further, it is the practice and discipline of working in cybersecurity that give developers the mindset and expertise to build software that is secure by design. We expect vulnerabilities, and we write more secure code because of it.
### Some of the drawbacks of vulnerability scanning tools
While automated scanning tools can provide a major asset in the arsenal of any cybersecurity professional, we must honestly acknowledge their weaknesses when set side-by-side with a human:
- An automated scanner can miss vulnerabilities if they are new and not in its database, or if the vulnerability is complex and adaptive. Scanners can only hunt for known vulnerabilities, and according to how automated scans are further configured by users.
- The problem of false positives can never be completely eliminated even by the most accurate scanners. In the end, a human expert is needed to filter them out.
- Detecting vulnerabilities is only the start. While some scanners assign an urgent priority to their findings, human expertise is needed to assess the _specific_ implications of these vulnerabilities for the platform, system or business.
- Once vulnerabilities are detected, fixing and patching them is a manual process. A vulnerability report is a starting point. Identifying a vulnerability is one thing; successfully remediating it is another. Further, security engineers will sometimes also have to further reengineer their code, to ensure a similar problem does not recur.
Of course, automatic scanners are excellent assets for speed and quick action, repeatability, ease of use, and constant monitoring. They can provide a good starting point for further investigations, not an end point. But they are not equivalent to a full penetration test and can only find risks that are known.
### What about AI in automated security scanning?
AI and machine learning contribute to the speed and accuracy of dealing with risks posed by known threats. But, for all these advances, the mind of the security engineer is still required when dealing with unknown or new threats, threats that are chaotic and unpredictable or morphable, and threats that don't follow the rules.
## The human factor in cybersecurity will always be required
The fact that there are automated tasks and processes in cybersecurity does not mean that good security as a whole is autonomous or automatic. Security is more about developing a _security mindset_ than a set of features.
For further information on the human element in SaaS security, see [Securing the future of SaaS: Enterprise Security and Single Sign-On](https://goauthentik.io/blog/2023-07-28-securing-the-future-of-saas#good-security-cant-be-automated-the-human-element-in-saas-security).
### Human cyber risk
Humans are at the forefront of cybercrime. Cyber crimes are committed by human beings using adaptation and innovation to invent fresh attack tactics. It is the human mind that continually develops new techniques to hack, infiltrate, and bypass security systems.
For example, if your company does not have a 2FA/MFA credential policy, vulnerabilities exist around whether your staff share user credentials to save time and stress. If these credentials are not updated regularly, or worse, if they're shared by email, any moderately skilled, malicious hacker could attempt to access the email account of a single user, and use it to find other company passwords. It is these human weaknesses and errors that most bad actors rely on.
_Over 80% of malicious hacks are as a result of the exploitation of the widest weakness of all—predictable human behavior._
### Human elements of cybersecurity
Even in a cybersecurity system that is maximally automated there is human input that can never be removed. Obviously, human experts are needed to guide the automated systems in their functioning. Automation technology depends on humans to set rules and workflows, monitor results over time, and rapidly prioritize then respond to alarming findings.
Once new and significant threats are detected by the automated security, it is human experts again who have to adjust the performance of the automated system as a response to this changing environment. Any further changes need humans to evaluate the performance of automated systems in real-time. Finally, it is humans who train staff in cyber threat detection for these new dangers.
### Human-centered cybersecurity
Despite the growing technology around automated security, and the temptation to relax when it is deployed, there are human factors that are irreplaceable in the practice of cybersecurity. We recently wrote about the importance of the "Blue Team" and how [organizational and product hardening](https://goauthentik.io/blog/2023-11-22-how-we-saved-over-100k#hardening) are an integral part of our human-centered security mindset.
- The human ability to think creatively and rapidly adapt to changing situations is invaluable to good security processes.
- The higher the security risk, the more you need skilled security professionals to supervise the security process.
- After automation has quickly gathered information, humans are needed to make any well-informed security and organizational decisions that may arise.
- Exclusively human tasks include containment, triage, remediation, and launching new initiatives such as better responses (see [Okta got breached again and they still have not learned their lesson](https://goauthentik.io/blog/2023-10-23-another-okta-breach)).
- Only humans can know the commercial implications of a data breach.
## The security mindset is not a feature
One misconception is that for every cybersecurity problem or threat, there is an automated feature in some software somewhere that can match it.
> Some cybersecurity software plans seem to promote feature-rich products but forget to promote highly skilled and aware cybersecurity teams with a proactive security mindset.
Companies have become too dependent on automation, due to the overwhelming volume of threats and frequency of attacks. This overreliance can cause all sorts of unintended problems—alert fatigue, data overload, devaluing human expertise and input, and an inability to handle zero-day (previously unknown) vulnerabilities.
Automated security platforms and measures assist and augment human expertise; they do not replace or supersede it. If their corresponding strengths and shortfalls are properly acknowledged, automation and teams with a healthily skeptical security mindset can collaborate for success.
Let us know if you'd like to learn more about how authentik works as a primary component in a security stack. You can send an email to hello@goauthentik.io, or find us on [GitHub](https://github.com/goauthentik/authentik) or [Discord](https://discord.com/channels/809154715984199690).

View File

@ -1,93 +0,0 @@
---
title: "Okta's October breach part two: a delayed but slightly better response"
description: "Okta continues to revel more information about the HAR files breach first revealed in October; now we know that a service account was involved, and 100% of their customer support users were impacted."
slug: 2023-12-12-oktas-october-breach-part-two
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- authentik
- security mindset
- incident response
- service account
- Okta
- SSO
- HAR files
- identity provider
- authentication
- Authentik Security
hide_table_of_contents: false
image: ./okta-timeline.png
---
> **_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._**
---
On November 29th, 2023, Okta [revealed](https://sec.okta.com/harfiles) that a breach they announced in October was much worse than originally conveyed. The number of impacted users went from less than 1% of customers to every single customer who had ever opened a Support ticket in the Okta Help Center.
> So the impact leapt from [134 users](https://sec.okta.com/articles/2023/11/unauthorized-access-oktas-support-case-management-system-root-cause) to [18,400 users](https://www.beyondtrust.com/blog/entry/okta-support-unit-breach-update).
We wrote in October about Okta's poor response to breaches (see [Okta got breached again](https://goauthentik.io/blog/2023-10-23-another-okta-breach)), but since our blog doesn't seem to be changing Okta's behaviour, let's take a closer look at the new revelations from Okta about what happened back in October, how it is impacting users now, and why Okta is still dealing with it in December.
> Now all of Okta's customers are paying the price… with increased phishing and spam.
Our take is that any company can be hacked, but it is the response that matters. How quick is the response, how transparent are the details, how forthright are the acknowledgments? Okta's initial announcement about the October breach (remember the [HAR file](https://goauthentik.io/blog/2023-10-23-another-okta-breach) that contained a session token?) was less than timely, devoid of details, and titled with one of the worst titles ever given such a serious announcement.
![screenshot of the timeline that Okta published](./okta-timeline.png)
<!--truncate-->
## Looking back at October's breach
With the original incident, probably what most people now recall is not only the technical details of the session tokens that were exposed in HAR files, but also the very slow response time. It turns out 1Password reported the breach to Okta on September 29, and [BeyondTrust](https://www.beyondtrust.com/blog/entry/okta-support-unit-breach) reported the breach to Okta on October 2. But Okta waited three weeks before announcing the breach on October 20th.
In this October 20th announcement, Okta CISO David Bradbury stated that the malicious actor had gained access to Okta's Support dashboard and retrieved only names and email addresses for a very small number of customers. He explained that the hacker used session tokens that were not scrubbed from a HAR file (which Okta support routinely asks customers to submit for troubleshooting purposes) to gain access to specific customers' accounts. But what wasn't revealed at the time (because Okta themselves did not yet know) was _how_ the hacker obtained access to the Customer Support dashboard to access customer accounts and then download associated HAR files.
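As an aside, this class of leak is exactly why HAR files should be scrubbed before they are shared with any support team. The sketch below, a simplified example rather than a complete sanitizer (real scrubbing also has to cover request and response bodies and query strings), redacts cookies and credential-bearing headers from a HAR file using standard-library Python:

```python
import json

# Header names that commonly carry credentials or session state.
SENSITIVE_HEADERS = {"authorization", "cookie", "set-cookie", "x-api-key"}


def scrub_har(in_path: str, out_path: str) -> None:
    """Redact session tokens from a HAR file before sharing it with support."""
    with open(in_path, encoding="utf-8") as fh:
        har = json.load(fh)

    for entry in har.get("log", {}).get("entries", []):
        for section in (entry.get("request", {}), entry.get("response", {})):
            for header in section.get("headers", []):
                if header.get("name", "").lower() in SENSITIVE_HEADERS:
                    header["value"] = "REDACTED"
            for cookie in section.get("cookies", []):
                cookie["value"] = "REDACTED"

    with open(out_path, "w", encoding="utf-8") as fh:
        json.dump(har, fh, indent=2)


if __name__ == "__main__":
    scrub_har("support-session.har", "support-session.scrubbed.har")
```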
### The second Okta shoe fell in early November
As mentioned above, the new information revealed by Okta came from their security team retracing the steps of the original malicious actor. Okta's research and analysis was greatly aided by the fact that [BeyondTrust shared a suspicious IP address with Okta](https://www.beyondtrust.com/blog/entry/okta-support-unit-breach-update); the IP address that BeyondTrust believed to be the hacker's.
**Initial access gained via a service account**
This new finding, based on retracing the steps of that IP address, shows that the initial breach occurred when the hacker obtained the credentials for a service account "stored in the system", which provided access to the Customer Support dashboard and had permissions to view and update customer support cases. The hacker then reviewed customer support cases, downloaded the associated HAR files, and retrieved the session tokens.
From the [November 3rd announcement](https://sec.okta.com/articles/2023/11/unauthorized-access-oktas-support-case-management-system-root-cause), we now know that the service account credentials were exposed through an Okta employee's personal Google account, which the employee had accessed on their Okta-issued work laptop.
In announcing the service account's role in the breach, Okta's CISO stated:
> "During our investigation into suspicious use of this account, Okta Security identified that an employee had signed-in to their personal Google profile on the Chrome browser of their Okta-managed laptop. The username and password of the service account had been saved into the employee's personal Google account."
The use of the service account was discovered on October 16, by examining the activities of the IP address of the hacker that BeyondTrust had supplied to Okta. This begs the question of why Okta did not reveal the malicious use of the service account in their October 20th announcement. Perhaps they did not yet want to show details of their internal investigation?
### And a third shoe in late November
Now fast-forward to Okta's [November 29th announcement](https://sec.okta.com/harfiles). Back in October, it was known that after the hacker accessed Okta's Support dashboard, they ran queries on the support database to create reports containing customer data. In the November 3rd announcement, Okta shared that the report was thought to be quite small in scope; this is the infamous "less than 1% of Okta customers" statement.
But after more internal investigation and recreating the reports that the malicious actor ran, Okta announced on November 29th that their original statement that less than 1% of their users were impacted by the October breach was incorrect. Instead, Okta revealed that the scope of the report was much larger, and indeed concluded that "[the report contained a list of all customer support system users](https://sec.okta.com/harfiles)". A total of 18,400 users. The only customers not impacted are those in FedRAMP and DoD IL4 environments, who are on a separate support platform.
> An aside, perhaps, but the timing of Okta's update is interesting; the announcement was released on the same date as the quarterly earnings report. This could be seen as transparency, or it could be seen as damage control. (Also, why in the heck is Nov 29th within Okta's 3rd quarter of **fiscal year 2024**? But we aren't writing here about Okta's financial schedules; I digress.)
**Filters removed from report**
Apparently when the hacker ran queries to gather customer data, they used a standard template available from the dashboard. However, they removed all filters on the templated report, thus grabbing much more data than the original template would have returned.
In addition to removing all filters on the report, it seems that Okta's original analysis of the logs pertaining to the breach failed to take into account exactly HOW the hacker accessed and downloaded data:
> "For a period of 14 days, while actively investigating, Okta did not identify suspicious downloads in our logs. When a user opens and views files attached to a support case, a specific log event type and ID is generated tied to that file. If a user instead navigates directly to the Files tab in the customer support system, as the threat actor did in this attack, they will instead generate an **entirely different log event** with a different record ID."
## Now what?
The third shoe has dropped, but it feels like there might still be a fourth. Maybe even more data was stolen, beyond just email addresses. Perhaps the malicious actor gained more sensitive customer data when they were able to log into specific customers' accounts, but is sitting on it waiting to use it. Perhaps the hacker's explorations from within the customer support system revealed other weaknesses that have yet to be exploited.
So while we are all waiting with bated breath to see if Okta can squeeze even more drama into 2023, here are a few tips to consider:
- If you are an Okta customer, do indeed follow each and every one of their recommendations, listed under the "**Implementing recommended best practices**" section of their [November 29th announcement](https://sec.okta.com/harfiles).
- Be aware of Okta's plan for a 90-day pause on new features. During the [earnings report call](https://seekingalpha.com/article/4655057-okta-inc-okta-q3-2024-earnings-call-transcript) on November 29th, CEO Todd McKinnon stated "During this hyper-focused phase, no other project or even product development area is more important. In fact, the launch dates for the new products and features that we highlighted at Oktane last month will be pushed out approximately 90 days."
- As Okta advises, be on the lookout for more phishing attempts and stay hyper-vigilant.
- In general, across the board, be vigilant and adopt a "[security mindset](https://goauthentik.io/blog/2023-11-30-automated-security-versus-the-security-mindset)" (as valuable and maybe more than any technology).
- Consider breaking out of vendor lock-in and using an on-premise, open core solution such as [authentik](https://goauthentik.io/). We realize change is hard, but continual breaches and uncertainty around when a breach has been fully contained is also painful.
We'd be happy to talk with you more about your security and identity management needs; reach out to us with an email to [hello@goauthentik.io](mailto:hello@goauthentik.io) or on [Discord](https://discord.com/channels/809154715984199690/809154716507963434).

Binary file not shown.

Before

Width:  |  Height:  |  Size: 95 KiB

View File

@ -1,164 +0,0 @@
---
title: "Building the dream infrastructure stack for a security startup: preparing for human and technical scaling"
description: "What's in our stack: the tools we use to build authentik (and why we chose them)."
slug: 2023-12-21-five-lessons-from-choosing-infrastructure-tooling
authors:
- name: Marc Schmitt
title: Infrastructure Engineer at Authentik Security Inc
url: https://github.com/rissson
image_url: https://github.com/rissson.png
- name: Rebecca Dodd
title: Contributing Writer
url: https://www.thebasementoffice.co.uk
image_url: https://github.com/rebeccadee.png
tags:
- authentik
- startups
- infrastructure tooling
- tools
- technology stack
- Loki
- Argo CD
- Prometheus
- Thanos
- Transifex
- Lit
- Redis
- Grafana
- authentication
- Authentik Security
hide_table_of_contents: false
image: ./tech-stack1.png
---
> **_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._**
---
With great power (to choose your own tools) comes great responsibility. Not inheriting a legacy toolchain is an infrastructure engineer's dream, but it can be hard to know where to start.
As the first infrastructure engineer hired to work on authentik, I saw the greenfield opportunities, but also the responsibility and long-term importance of choosing the best stack of tools and build processes. From my past roles, I already knew many of the considerations we would need to factor in.
For example, we know that ease of maintenance is a primary consideration, as is the stability and probable longevity of the tool, how well the tools integrate, and of course the level of support we were likely to get for each tool.
In this post we share some of what we are using to build authentik, and the lessons behind those choices.
![technology stack for startups](./tech-stack1.png)
<!--truncate-->
## #1 Choices are often human, not technical
If there isn't much difference between two tools, the choice isn't a technical decision. It's going to come down to human factors like ease of use or the team's familiarity with the tool. This is why we use [GitHub Actions](https://docs.github.com/en/actions) for our CI—[we're already on GitHub](https://github.com/goauthentik) so it just makes sense.
> Familiarity with a tool means that you and your team can move faster, leading to higher business efficiency and a happier team.
### We use Argo CD for GitOps
When I joined Authentik Security, we were using [Flux CD](https://fluxcd.io/). Jens, our founder and CTO, had set up a small Kubernetes cluster to run an authentik instance for us to log into different services (some monitoring tools), and he was deploying all of this using Flux CD.
If you're not familiar, Flux and [Argo CD](https://argo-cd.readthedocs.io/en/stable/) enable you to do GitOps: whatever you want to deploy, you push that to a Git repository and then synchronize whatever is in production from that Git repository. Everything is committed and tracked in the Git history (helping you to understand what has changed and why).
You also don't need to do anything manually on the production servers or clusters—it's all done in Git. This helps with auditing, as history is tracked, and you can easily find who made a change. You don't need to give access to your production servers and cluster to whoever is conducting the audit, since they can see how everything is configured in the Git repo.
#### Flux and Argo CD essentially do the same thing
Despite Flux and Argo CD both being good at what they do, I advocated for switching to Argo CD because I have always worked with it, and that familiarity with the tool meant I'd be able to work with much greater efficiency and velocity.
Since switching to Argo CD, we've automated deployment of new pull requests with the `deploy me` label. A developer can add that label to one of their open PRs, and the changes get deployed to a production-like environment so they can test those changes with a real domain and real certificates—it's exactly the same as how a client would interact with those changes. It's especially useful for mobile app development because instead of launching an authentik instance locally, you can test the mobile app against a production-like environment. This ability to access a “test deployment” is great for QA, tech writers, technical marketing teams, and anyone else who needs early access to a feature before it even gets merged.
#### Setting us up to scale
Argo CD also comes with a built-in UI, which Flux does not. This is useful because as we grow as a company, we will have more developers and we want to enable self-service and a culture of “you build it, you run it.”
With the Argo CD UI, a developer can make changes in Git, view the changes in the UI, and validate whether the application started correctly and everything is running. There's no need to build another tool or set up Grafana dashboards or some other solution for developers to check if the application is running correctly.
“You build it, you run it” in this case isn't about operations or infrastructure leaving developers to figure things out on their own. What we actually want is to empower devs to run things themselves so that:
1. Everyone shares the burden of production.
2. Developers have a shorter feedback loop to see how their app behaves in production.
This type of choice is about setting things up for scalability down the road, which leads me to our next lesson.
## #2 Build with scale in mind
Our founder, Jens, has written before about [building apps with scale in mind](https://goauthentik.io/blog/2023-06-13-building-apps-with-scale-in-mind) and [doing things the right way first time](https://goauthentik.io/blog/2023/10/26/you-might-be-doing-containers-wrong/).
As an infrastructure engineer especially, it can be so hard to deal with legacy tools and solutions (sometimes you just want to burn it all down and start over). It's just so much easier to maintain things if you do them properly from the beginning. Part of why I wanted to join Authentik Security was that there wasn't any legacy to deal with!
Yes, premature optimization is the root of all evil, but that doesn't mean you can't think about scalability when designing something. Having a design that can scale up if we need it to, but that can also run with few resources (human or machine)—even if a few compromises are necessary to allow it to do so—is oftentimes better than having a design that wasn't built with scale in mind. This can spare you having to redesign it later (and, on top of that, migrate the old one).
### We use Transifex for translation
Internationalization isn't often high on the list for open source projects or developer tool companies, but we've been doing it with [Transifex](https://www.transifex.com/).
If your users are developers, they are probably used to working with tools in English. Whoever administers authentik for a company in France, for example, probably knows enough English to get by. But that company's users may not need to speak English at all in their role because they're on the legal or finance side. Those users still need to log in using authentik, so it's great to be able to provide it in their language.
We use [Lit](https://lit.dev/) for our frontend (Jens has written about [choosing Lit over React](https://goauthentik.io/blog/2023-05-04-i-gambled-against-react-and-lost)), which supports translation by default:
- With Lit, were able to extract strings of text that we want to translate.
- Those strings are sent to Transifex, where we can crowdsource translations.
- We do this by marking strings as “source strings” with just three extra characters per string, which is not that much of an effort if you're doing it from the outset vs implementing it afterwards.
Native speakers of a given language can help us polish our translations; this is a great way to enable people to contribute to the project (not everyone can or wants to contribute code, for example).
## #3 Think about product-specific requirements
As a security company, some of our choices are influenced by product- and industry-specific needs.
As youre building your own stack, you may need to think about the requirements of your own product space or industry. You might have customer expectations to meet or compliance requirements, for example.
### We use Redis for reputation data
Most of our storage is done in [PostgreSQL](https://www.postgresql.org/), but for some types of storage we use [Redis](https://redis.io/) for latency reasons, as its much faster to fetch data from.
We have two use cases for Redis:
#### Reputation data
If someone tries to log in and fails, we temporarily store bad reputation weights associated with their IP address in Redis. This enables authentik admins to manage logins more securely; if someone has a reputation of less than a certain threshold (because they tried bad login details a few too many times), the authentik admin can block them.
That data is stored in Redis temporarily; we have a subsequent task that fetches it from Redis and stores it in the database. That way, if you want to keep updating the reputation data for a user (because they keep trying to log in with bad inputs), were just updating Redis and not PostgreSQL every time. Then when that data is moved from Redis to PostgreSQL, its compacted.
#### Session data
This use case is more common: with every request, we check that the user is still logged in or if they need to log in. Again, we store this data in Redis for latency reasons.
## #4 Your choices will cost you, one way or another
Of course, budget is going to play a role in the tools you choose. You have to balance what you invest in your tooling, whether you pay in money or in time.
### We use Loki for logging
We talked about this in our recent [post about building a security stack with mostly free and open source software](https://goauthentik.io/blog/2023-11-22-how-we-saved-over-100k). As we wrote, [Loki](https://grafana.com/oss/loki/) is free, open source, and cheap to run. We could have gone with something like Elasticsearch (and the whole Elastic Stack) but its so expensive to run in terms of processing power and memory resources. Loki isnt as easy to run, but we save on costs.
> It comes back to the idea of “you either pay in time or money” for software, and for most of authentiks tooling Ive already paid in time for it.
## #5 Optimize for stability and support
Infrastructure people just want to be able to sleep at night and not get paged at all hours (for that we use [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), just in case). I just wanted to set up things that work and are reliable—pragmatic reasons similar to [why we chose Python and Django](https://goauthentik.io/blog/2023-03-16-authentik-on-django-500-slower-to-run-but-200-faster-to-build).
These next tools (all open source) were easy choices because Ive already used them and know how to configure them properly and what pitfalls to watch out for.
### We use Grafana and Prometheus for monitoring and metrics
[Grafana](https://grafana.com/grafana/) is widely used and comes with a lot of existing resources, so we dont have to build everything ourselves. We use it to display our logs and metrics in dashboards so we can view and correlate data from Loki and [Prometheus](https://grafana.com/oss/prometheus/) together. Prometheus scrapes metrics from our infrastructure tools, authentik instances, and other applications.
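As a rough illustration, a minimal Prometheus scrape configuration for this kind of setup might look like the following; the job names, targets, and metrics port are assumptions for the sketch, not our production config.

```yaml
# Sketch: minimal prometheus.yml scrape configuration. Targets and the
# metrics port are placeholders; adjust to however your instances
# expose /metrics.
scrape_configs:
  - job_name: authentik
    scrape_interval: 30s
    static_configs:
      - targets:
          - authentik-server:9300   # assumed metrics endpoint
  - job_name: node
    static_configs:
      - targets:
          - node-exporter:9100
```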
#### We use Thanos to manage metrics data
I used Thanos for a large-scale deployment in a previous role—storing something like 20TB of metrics per six months—so I knew it would work for us now and later as we scale.
Prometheus stores metrics for roughly a day before [Thanos](https://thanos.io/) fetches them and pushes them to S3 storage for longer-term retention. We do this because Prometheus doesnt handle storing large amounts of data well. We also run Prometheus in a highly available fashion, which would otherwise mean storing all of that data twice, and that gets expensive.
Thanos compresses the metrics data and also does downsampling:
- For 30 days we keep everything thats scraped (every 30/60 seconds)
- Beyond that, for 90 days we keep only a metric point every five minutes
- For a year, we keep just one metric point per hour
By retaining less data as time passes, queries are faster and storage is cheaper. Why keep metrics for such a long time? It gives us a view of the seasonality of traffic so we can do better capacity planning.
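Expressed as Thanos Compactor retention flags, that schedule would look roughly like the sketch below; only the `--retention.*` flags are the point, and the surrounding container spec is illustrative rather than our actual manifest.

```yaml
# Sketch: Thanos Compactor retention flags matching the schedule above.
containers:
  - name: thanos-compact
    image: quay.io/thanos/thanos:v0.34.0
    args:
      - compact
      - --wait
      - --objstore.config-file=/etc/thanos/objstore.yml
      - --retention.resolution-raw=30d   # keep raw samples for 30 days
      - --retention.resolution-5m=90d    # keep 5-minute downsamples for 90 days
      - --retention.resolution-1h=1y     # keep 1-hour downsamples for a year
```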
## Final thoughts
This isnt a groundbreaking stack. We arent optimizing for the latest edge technology. (The only somewhat controversial choice weve made has been moving to an [IPv6 only](https://goauthentik.io/blog/2023-11-09-IPv6-addresses) network.) For the most part weve gone with options that are tried and tested, well known to the team, and play nicely with the other parts of our stack.
As always, we would be interested in hearing your thoughts about the stack we use, and about the tools that you and your team have chosen and the reasons behind those choices. Send us an email to hello@goauthentik.io or chime in on [Discord](https://discord.com/channels/809154715984199690/809154716507963434).

View File

@ -1,201 +0,0 @@
---
title: Open source developers are the original content creators
slug: 2024-02-07-open-source-devs-are-the-original-content-creators
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
- name: Nick Moore
title: Contributing Writer
url: https://nickmoore.me/
image_url: https://nickmoore.me/assets/images/image01.jpg?v=128b1f3c
tags:
- authentik
- access management
- open source
- content creators
- software
- GNU
- identity provider
- authentication
- Authentik Security
hide_table_of_contents: false
image: ./content-creator.png
---
> **_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._**
---
In 2024, Tom Scott and Jynn Nelson, otherwise different people in different worlds, faced very similar problems.
- Tom Scott is a YouTuber who, as of this writing, has gotten nearly 2 billion views across over 700 videos. Nearly 6.5 million people subscribe to Tom Scotts [YouTube channel](https://www.youtube.com/@TomScottGo/videos).
- Jynn Nelson, a senior engineer, is a major maintainer of Rust, an open-source project that 2023 StackOverflow research showed was the [most admired language](https://survey.stackoverflow.co/2023/#productivity-impacts-knowledge-ic) among developers. About [2.2 million people](https://yalantis.com/blog/rust-market-overview/) are Rust developers.
In a [goodbye video](https://youtu.be/7DKv5H5Frt0?feature=shared), Scott announced an extended break from his channel, saying, "I am so tired. There's nothing in my life right now except work.”
In a post called [the rust project has a burnout problem](https://jyn.dev/the-rust-project-has-a-burnout-problem/), Nelson wrote, articulating sentiments across the Rust community, “you want a break, but you have a voice in the back of your head: _the project would be worse without you_.’”
Its unfortunate that this comparison makes the best opening to the point of this post: open source developers are much more like content creators than most people tend to assume.
> If anything, when you look at the history of the Internet and the history of distributing content online, open source developers might be the _original_ content creators.
By looking at the paths they have both paved and recontextualizing their work within a broader view of the creator economy, we can come to a better understanding of the shared futures of content creators and open source developers.
![<a href="https://www.freepik.com/free-photo/content-concept-laptop-screen_2755663.htm#query=content%20creation&position=0&from_view=keyword&track=ais&uuid=875faa67-ef14-4b81-8b12-bcb69973d094">Image by rawpixel.com</a> on Freepik](./content-creator.png)
<!--truncate-->
## Open-source maintainers were creating content before it was cool
In the past decade, a series of similar “economies” have risen and fallen, including the creator economy, the passion economy, and much of web3.
Evan Armstrong captured these collapses well, writing about the [crash of the creator economy](https://every.to/napkin-math/what-happened-to-the-creator-economy) in 2023. “Dollars invested are down 86% to $123M,” he wrote. “Next came the layoffs. The giants of the space have had issues: Patreon laid off 17% of staff, Linktree first sacked 17% of staff, then a few months later another 27%, Cameo has laid off 160 (probably 33%+ of staff).”
But unlike other economies, say, [the paper industry in Maine](https://www.jstor.org/stable/10.7591/j.ctvxkn85v), the factories havent left: influencers are still posting on Instagram, newsletter writers are still growing subscriber numbers on Substack, and TikTok creators are still going viral.
> Its a contradiction with a simple answer: typical conceptualizations of the creator economy are too limited, and the history of content creation is much longer and broader than most thought leaders and investors realize.
### A very brief history of open source
In 1974, software became copyrightable and it quickly shifted from free-by-default to paid. Once companies could control it, closed-source software took off.
Companies enforced copyrights and trademarks and leased the right to use their software. In 1976, Bill Gates wrote an [open letter to hobbyists](https://archive.nytimes.com/www.nytimes.com/library/cyber/surf/072397mind-letter.html), arguing that “most of you steal your software,” and in 1983, IBM stopped distributing source code to people who purchased IBM software.
In reaction to developments like these, Richard Stallman founded the GNU Project in 1983 and the Free Software Foundation in 1985.
He wrote in [The GNU Manifesto](https://www.gnu.org/gnu/manifesto.html) that “Many programmers are unhappy about the commercialization of system software. It may enable them to make more money, but it requires them to feel in conflict with other programmers in general rather than feel as comrades.”
Here, Stallman laid out one of the visions thats continued driving open source to this day: “Once GNU is written, everyone will be able to obtain good system software free, just like air.”
Over the following decades, open-source developers and maintainers used the nascent and eventually mature Internet to build software projects that were hobbies, industry-supporting keystones, and everything in between.
[![cartoon of abstract machine, with unsteady building blocks supporting it.](./image1.png)](https://xkcd.com/2347/)
Amongst this growth, another economy surfaced, too: a huge crop of companies that built tooling and platforms for open source as well as a variety of business models, such as open core, to support open source. Open source, once primarily adversarial to private industry, has become integral to it.
### Is software content?
Open-source developers were creating content and distributing it on the Internet long before everyone else. The pioneering work of what we might now call the creator economy often goes unrecognized for three major reasons:
- Software isnt always seen as content in the same way video content and text content are.
- The original philosophy of open source emphasized community and collaboration, framing a movement that extended beyond any individual developer.
- Early open-source developers emphasized a “[gift culture](http://www.catb.org/~esr/writings/cathedral-bazaar/homesteading/index.html),” with people like Eric Raymond arguing that software should be “freely shared.” Content creators, however, have long depended on centralized platforms like YouTube that often offer built-in monetization tools.
These distinctions, as significant as they might seem at first glance, are collapsing. Two decades after Raymonds _The Cathedral and the Bazaar_, Nadia Eghbal wrote _Working in Public,_ and in it, she notes: “Like any other creator, these developers create work that is intertwined with, and influenced by, their users, but its not collaborative in the way that we typically think of online communities. Rather than the users of forums or Facebook groups, GitHubs open source developers have more in common with solo creators on Twitter, Instagram, YouTube, or Twitch.”
Of all people, considering the open letter cited earlier, Bill Gates might have been the first to realize this, [writing in 1996](https://medium.com/@HeathEvans/content-is-king-essay-by-bill-gates-1996-df74552f80d9) that “When it comes to an interactive network such as the Internet, the definition of content becomes very wide. For example, computer software is a form of contentan extremely important one, and the one that for Microsoft will remain by far the most important.”
Open source led the way, but now, this pioneering work is curling back on itself and the future of open source requires recognizing its connection to the creator economy as a whole.
## 5 ways open source paved the way for content creators
Open source developers pioneered new ways of creating and distributing content on the Internet, and those lessons are worth re-contextualizing and re-learning for the sake of open source and for a new, larger understanding of the creator economy.
### 1. Misleading margins abound
One of the major reasons the creator economy took off as a target for venture capital is because content creation has zero margin in theory. Like software, these venture capitalists proposed, you could create once and reproduce freely forever.
Theoretically, a YouTube creator should be able to make a library of great videos and make ad money for as long as the videos remain online. Unless its covering breaking news, a great video should still be great in six months, two years, and five years. Create once. Profit forever.
This isnt how it works. On YouTube, views can plummet if you dont stay in peoples minds and if you dont keep on trend. YouTube creators are building a brand and benefit from uploading regularly, even if that leads to creators like Tom Scott uploading a video every week for ten years without a break.
Of course, investors could have learned this lesson sooner by looking at open source. A similar mistaken assumption applies: build the software once and distribute it forever. But, again, this isnt how it works.
As Eghbal writes, open source maintainers are “expected to maintain the code they published for as long as people use it. In some cases, this could be literally decades, unless the maintainer formally steps away from the project.”
![screenshot of Apache website's download page.](./image2.png)
[Apache](https://httpd.apache.org/), for example, launched in 1995, celebrated its 25th anniversary in 2020, and released its most recent version in 2023.
Software degrades over time (think of tech debt, integration issues, changing standards, etc.) in much the same way a YouTubers brand degrades over time. Both need maintenance just to persist, much less grow.
### 2. Firewalls require vigilance
In traditional journalism, the “firewall” (sometimes referred to as a separation between church and state) is a conceptual and logistical distinction between the editorial department and the advertising department. If the two were to mix, advertising needs would bias editorial goals and subscribers wouldnt trust the publisher that mixed them.
The same distinction extends to content creation and open source.
In content creation, the trust a creator has built with their audience is paramount, and maintaining the firewall between their content and their sponsors is essential.
On Instagram, for example, an influencer needs to be very clear about whether a given post is an ad or not. There are legal standards around this issue (the [SEC charged Kim Kardashian](https://www.sec.gov/news/press-release/2022-183) a fine in 2022, for example, for not disclosing that the crypto company she was promoting had paid her), but the bigger issue is maintaining audience trust.
Without trust, you cant influence.
These kinds of controversies are not novel for open source developers. Similar discussions arise when vendors offer to support or acquire an open source project and when an open source maintainer starts taking sponsorships.
Charity Majors, CEO and cofounder of Honeycomb, came up through open source and when she founded a for-profit company, the firewall singed her. “I came from open source,” she writes in a [2023 post](https://charity.wtf/2023/03/29/questionable-advice-people-used-to-take-me-seriously-then-i-became-a-software-vendor/), “where contempt for software vendors was apparently _de rigueur_.”
Back then, she writes, she and others assumed vendors were “liars” that would “say anything to get you to buy.” Majors eventually learned that vendors werent all bad, but her experience exemplifies how the separation between open source and vendors (as well as content creators and advertisers) can be fraught.
She now recommends vendors “lead with [their] bias” and says that she “discloses [her] own vested interest up front.” The boundaries can be crossed, either by projects seeking sponsorships or by developers seeking employment, but the boundary requires respect.
### 3. Audiences are a source of survival and stress
Influencers require a significant level of fame to achieve success: enough viewers to earn brand deals, enough fans to clamor outside makeup stores, enough listeners to sell out live shows.
But even though creators depend on their audiences, those same audiences can be a huge source of stress. A big audience can mean pressure and it can also sometimes mean a [public pillorying](https://www.distractify.com/p/influencers-canceled-quickly).
Open source developers rarely have fans in the same way, but they frequently run into a similar dynamic. As an open source project becomes popular, more people want to contribute, but because contributions are rarely perfect, PR review can become a job unto itself.
Nolan Lawson, for example, a major contributor to PouchDB, told Eghbal that open source popularity can create “a perverse effect where, the more successful you are, the more you get punished with GitHub notifications.”
[![screenshot showing number of notifications at 2,495, from blog page of Anthony Fu writing about how he manages GitHub notifications.](./image3.png)](https://antfu.me/posts/manage-github-notifcations-2023)
As [Alex Danco writes](https://danco.substack.com/p/making-is-show-business-now), “Success brings attention, interaction, and maintenance - both of the code itself, and of the creators reputation. This all takes work, and its often not the kind of work the creators like doing.”
Success can then breed disillusionment and sometimes burnout. Many early open source proponents imagined free-flowing collaboration sustaining the movement, but many maintainers arent finding as much collaboration as theyd like or need. As Eghbal writes, “Its not the excessive consumption of code but the excessive participation from users vying for a maintainers attention that has made the work untenable for maintainers today.”
Both open source developers and content creators can suffer from success.
### 4. Sustainability vs. selling out
Open-source maintainers faced the issue of “selling out” long before content creators faced it. And yet, ironically, even current open source developers struggle with monetization more than content creators do.
The modern, if limited, definition of the creator economy arose after numerous important creator platforms were established (YouTube, Instagram, etc.). With YouTube, especially, monetization was eventually built in. The highest-earning creators tend to seek partnerships but advertising money flows through the platform itself.
In open source, the original culture has proven much more resistant to monetization. Raymond emphasized an abundance mindset and a gift culture, fostering a perspective that sometimes prioritizes the movement above any individual maintainers sustainability.
But things might be changing. When Majors worked at Facebook, for example, she realized that “Open source successes like Apache, Haproxy, Nginx, etc. are exceptions, not the norm; that this model is only viable for certain types of general-purpose infrastructure software… If steady progress is being made, at the end of the day, somewhere somebody is probably paying those developers.”
On the other side of these success stories are open source developers working for little recompense. Alex Clark, for example, maintains Pillow, an open source project that has been downloaded millions of times and has even been used by NASA in its Mars Ingenuity helicopter.
But the income didnt [keep up with the influence](https://www.techtarget.com/searchitoperations/feature/Who-profits-from-open-source-maintainers-work). “Our income is disproportionate if this thing is everywhere across the entire globe, used by Fortune-whatever companies,” Clark said. “It's disproportionate. And there's no easy way to fix that."
This isnt an isolated feeling. According to [2023 Tidelift research](https://4008838.fs1.hubspotusercontent-na1.net/hubfs/4008838/Tidelift-2023-open-source-maintainer-survey.pdf), 77% of the maintainers who are not paid would prefer to get paid, 22% have quit open source, and 36% have considered quitting.
[![graphic depicted poll showing that 77% of unpaid maintainers of open source projects would prefer to be paid.](./image4.png)](https://4008838.fs1.hubspotusercontent-na1.net/hubfs/4008838/Tidelift-2023-open-source-maintainer-survey.pdf)
Open source developers learned the hard way that monetization is hard even if influence is indisputable.
### 5. Building despite the bus factor
Mr. Beast, the YouTube channel, has gotten over 42 billion views across nearly 800 videos and [employs about 250 people](https://www.businessinsider.com/whats-it-like-to-work-for-mrbeast-biggest-youtuber-world-2023-11#:~:text=MrBeast%20is%20likely%20the%20most,June%202023%2C%20according%20to%20Forbes.).
But if Mr. Beast, the person, were hit by a bus tomorrow, a channel that routinely earns hundreds of millions of views per video would likely plummet in popularity. Its a grim example of the [bus factor](https://en.wikipedia.org/wiki/Bus_factor): the idea that companies whose employees hold centralized knowledge or power are exposed to immense risk as a whole.
Few open source maintainers have anything nearing the celebrity status of Mr. Beast and few open source projects could even really be considered personality-driven. And yet, many open source projects would suffer a similar fate from a similar bus factor.
Tidelift research shows that nearly half of all open source maintainers work alone; [Synopsys research](https://thenewstack.io/open-source-needs-maintainers-but-how-can-they-get-paid/) shows that 91% of codebases contained open source software that had had no developer activity in the past two years; and [Linux Foundation research](https://thenewstack.io/open-source-needs-maintainers-but-how-can-they-get-paid/) found that only 35% of projects had a strong new contributor pipeline.
In other words, the bus factor is alive and well in open source. If anything, the differences between open source and content creation make the result of this dynamic relatively worse for open source.
If Mr. Beast retires, every one of his fans will know; if a key open source maintainer retires, their project could continue on, zombie-like, until a security issue reveals everyone was depending on a project with no one at the helm.
## The bazaar will outlast the creator economy
Open source developers are frequently undervalued, but between Raymond and Eghbal, as well as some lessons from traditional content creators, we can see a path toward greater recognition.
Raymond writes that in open source, “the only available measure of competitive success is reputation among one's peers,” but reputation is not automatically granted upon merging code.
Eghbal clarifies, writing that “Open source developers are chronically undervalued because, unlike other creators, theyre tied to a platform that doesnt enable them to realize the value of their work. Instead of operating quietly in the background, open source developers ought to come to the forefront again.”
More and more open source developers are coming to the foreground, including [Cassidy Williams](https://cassidoo.co/), who has a strong Twitter and TikTok presence, and Shawn Wang (popularly known as @swyx) who runs an influential blog and advocates for devs [learning in public](https://www.swyx.io/learn-in-public).
As Danco writes, “Making technology seems like a world apart from entertainment and show business. But in this new world, making _is_ show business.”
The more that open source developers and the content creators that came up after them can learn from each other, the more sustainable the whole creator economy will be.
As always, we want to hear your thoughts. Reach out to us via email at [hello@goauthentik.io](mailto:hello@goauthentik.io) or on [Discord](https://discord.com/channels/809154715984199690/809154716507963434)!

View File

@ -1,99 +0,0 @@
---
title: "Happy New Year from Authentik Security"
slug: 2024-1-12-happy-new-year-from-authentik-security
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- authentik
- happy new year
- new features
- year in review
- SSO
- SaaS
- SCIM
- RADIUS
- remote access
- RBAC
- identity provider
- authentication
- Authentik Security
hide_table_of_contents: false
---
> **_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._**
---
A hearty Happy New Year to you all, from all of us at Authentik Security, with sincere wishes that your 2024 may be filled with a maximum of joys (new features and elegant code) and a minimum of pains (bugs and the dreadful reality of not-enough-time).
> The start of a new year makes me want to first say **thank you** for the past year.
## Thank you!
**Thank you to our community**, from the newly joined members to our long-time friends and moderators and holders-of-knowledge. Without you all, well… we literally wouldnt be here. No matter how deep your knowledge of authentik is, its really your willingness to explore and test and give feedback on new and old features, all while supporting each other and staying in touch with good humor and vibes, that makes us such a vibrant community.
**Thank you to our users**, from those who run authentik in their homelabs to those who run authentik in production, and everyone in between. We appreciate your trust and guidance, and your input into how we can provide the most-needed features and grow our product in the ways that solve your business needs and challenges.
**And of course thanks to our small team** here at Authentik Security, who joined me on this adventure and brought your skills and talents, your experience and passions, and your dedication to our product and users. We built a lot together last year, and this year has a rock-star list of features and functionality coming up!
## Accomplishments in 2023
Looking back to the work we did in 2023, the new features are just a part of the overall achievements and celebrations (and challenges) of building a new [company](https://goauthentik.io/blog/2022-11-02-the-next-step-for-authentik), growing the team, celebrating our [1st year](https://goauthentik.io/blog/2023-11-1-happy-birthday-to-us), and [defining our tools and processes](https://goauthentik.io/blog/2023-12-21-five-lessons-from-choosing-infrastructure-tooling). But we released quite a few new features that Im proud to share.
### RBAC
[RBAC](https://goauthentik.io/docs/user-group-role/access-control/) (role-based access control) is the gold standard of access control. RBAC provides the ability to finely configure permissions within authentik. These permissions can be used to delegate different tasks, such as user management, application creation and more, to users without granting them full superuser permissions. authentik has had internal RBAC for a long time (and of course the policy engine for restricting access to applications); however, granular control over access to individual objects within authentik (like Users, Groups, etc.) was not previously possible.
### Enterprise Support
Providing dedicated support with a proper ticketing system was a big accomplishment for 2023. Support was the flagship feature of our [Enterprise release](https://goauthentik.io/blog/2023-08-31-announcing-the-authentik-enterprise-release) in the fall of 2023.
### SCIM support
Our [2023.3 release](https://goauthentik.io/docs/releases/2023.3) added support for the SCIM (System for Cross-domain Identity Management) protocol, allowing users to be provisioned into other IT systems, with the provider synchronizing Users, Groups, and user memberships.
### RADIUS Support
The [RADIUS protocol](https://goauthentik.io/docs/providers/radius/) for authentication allows for the integration of a wider variety of systems such as VPN software, network switches/routers, and others. The RADIUS provider also uses a flow to authenticate users, and supports the same stages as the [LDAP Provider](https://goauthentik.io/docs/providers/ldap/).
## Whats coming up in 2024?
Looking forward to new functionality for the new year, let me share some of the ones I am most excited about. As with any small development team, we tackle what we can, with an eye on which features will be most beneficial for you all, which have dependencies upon later features, maintainability as we further develop the feature, and how to best get them all out the door fully tested and documented.
### Wizardry
The task of adding the applications that you want authentik to authenticate is about to get a lot easier; we have a new wizard that combines the process of defining a new provider and a new application into one single task. This new wizard saves many steps and streamlines the process. Look for it in preview mode in our current 2023.10 release (navigate to the Applications page in the Admin UI), and let us know your thoughts. We will continue tweaking it, specifically the multi-select functionality, but feedback is always welcome!
![](./new-app-wizard.png)
### Remote Access Control (RAC)
With [RAC](https://goauthentik.io/docs/providers/rac/), in preview now with a full release in early 2024, authentik Admins are able to access remote Windows/macOS/Linux machines via [RDP](https://en.wikipedia.org/wiki/Remote_Desktop_Protocol)/[SSH](https://en.wikipedia.org/wiki/Secure_Shell)/[VNC](https://en.wikipedia.org/wiki/Virtual_Network_Computing). The preview version already supports a bi-directional clipboard between the authentik client and the remote machine, audio redirection (meaning you can hear audio from the remote machine on your local instance), and resizing of the window through which you view the remote machine.
### Mobile authenticator app for authentik
Soon you will be able to download our new authentik authenticator app from the Apple App Store and, a bit further into 2024, from the Google Play Store. This app can be used for 2FA/MFA verification when authentik users log in to authentik or access any application managed by an authentik instance. The first release of this app will use number matching as the default verification process: users will open their authentik authenticator app on their phone, be prompted with a set of three numbers, and then need to select the number that matches the one displayed on their authentik instance login panel.
### Building out our SaaS offering
One of our most exciting, and definitely our biggest, projects for 2024 will be developing our SaaS offering, the hosted, fully-managed Enterprise Cloud. The Enterprise Cloud plan will provide the convenience of our enterprise-level product as a SaaS offering, hosted and managed by Authentik Security. For many organizations, the benefits of decreased operational costs and universal data access (no VPN, servers, and network configuration required) make SaaS the best choice. With the cloud offering, the same enterprise-level support plan is included, and migrating to self-hosted is always an option.
### DX and UX and quality-of-life improvements
As we mentioned in our blog about our one-year anniversary, we also plan to spend some time focused on user experience.
- Increase our focus on UX and ease-of-use, templatizing as much as possible of the frontend components, and developing a UI Style Guide
- A redesigned website, with more information about our solutions, use cases, and offerings
- New structure for our technical documentation; leveraging information architecture and user research to make it easier to find what you are looking for in our docs
- Defining even more robust tests and checks for our CI/CD pipeline and build process
- Stronger integration and migration testing, both automated and manual
- Spending more time on outreach and user research to learn what you all want
### Yes, a big year ahead
As most of us in the software and technology space know, the hard work of building new features and growing a company is, well, actually kind of fun. Challenging, yes, but always rewarding.
Wed love to hear from you all about our upcoming plans; reach out to us with an email to [hello@goauthentik.io](mailto:hello@goauthentik.io) or on [Discord](https://discord.com/channels/809154715984199690/809154716507963434).

View File

@ -1,120 +0,0 @@
---
title: "While youre busy fixing vulnerabilities, someone is phishing your employees"
slug: 2024-1-18-while-youre-busy-fixing-vulnerabilities
authors:
- name: Jens Langhammer
title: CTO at Authentik Security Inc
url: https://github.com/BeryJu
image_url: https://github.com/BeryJu.png
tags:
- authentik
- access management
- phishing
- SCA
- vulnerabilities
- patches
- security hygiene
- security policy
- identity provider
- authentication
- ISO/IEC 27001
- SOC II
- Authentik Security
hide_table_of_contents: false
image: ./security-hygiene4.png
---
> **_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._**
---
Last year we shared [our (mostly free and open source) security stack](https://goauthentik.io/blog/2023-11-22-how-we-saved-over-100k), including tooling we use for basic security coverage like visibility, dependency management, penetration testing, and more. Even with these tools set up, there are still activities and practices you need to do routinely and proactively to ensure youre not at risk.
There are frameworks you can look to (e.g. [NIST](https://www.nist.gov/cyberframework), [OWASP SAMM](https://owasp.org/www-project-samm/)) but these can be overwhelming if youre a one-person team or new to security. If youre coming into 2024 with a fresh resolve to improve your security posture, heres our advice on what to prioritize (and where you can automate).
![](./security-hygiene4.png)
<!--truncate-->
## The biggest security risk is poor access management, not vulnerabilities
[![](./phish1.png)](https://www.reddit.com/r/cybersecurity/comments/12ygfnw/comment/jhok5tz/)
When was the last time you heard of a major breach where they actually exploited the companys application to gain access? You are far more likely to be an unlucky victim of phishing, social engineering, stolen credentials, or insider threats than you are a targeted attack on your application.
It takes a lot of effort for hackers to study your app and infrastructure to find a way in. These types of attacks also dont necessarily mean hackers gain access to all your data, which is what theyre usually after. Attacks that simply take you down are just not valuable for them.
> Most of the major security breaches in recent years have been [a result of compromised access](https://goauthentik.io/blog/2023-10-23-another-okta-breach).
The UK governments [Cyber security breaches survey 2023](https://www.gov.uk/government/statistics/cyber-security-breaches-survey-2023/cyber-security-breaches-survey-2023) found that the percentage of businesses and charities with some basic cyber security hygiene practices actually _declined_ between 2021 and 2023:
| Security practice | 2021 | 2023 |
| ------------------------ | ---- | ---- |
| Use of password policies | 79% | 70% |
| Restricting admin rights | 75% | 67% |
### At a minimum, resolve to review access this year
Access management is number one—both in terms of things that get neglected, and a relatively simple thing to get on top of that can significantly reduce the impact of a breach.
There are two nightmare access scenarios for a security engineer:
- **Everybody has admin status** ([Sysdig reports](https://sysdig.com/2023-cloud-native-security-and-usage-report/) that 90% of accounts have excessive permissions!)
- **Passwords are stored in plain text somewhere** (probably to enable shared logins)
These practices are risky for multiple reasons. If everyone has admin permissions, suddenly a hacker only needs one compromised account to gain access to everything and do a lot of damage. And we all agree that storing passwords in a spreadsheet is not secure, and makes it hard to isolate who has access to what, and why.
Big, mature corporations tend to have more strict permissions. One of the blessings/curses about working in startups is that they are usually more flexible—employees have fewer limitations on the scope of their role, which can be great for collaboration and people taking initiative. But that fluidity can make it harder to keep tabs on who should have access to what. Being overly permissive with admin status also opens the door to [shadow IT](https://en.wikipedia.org/wiki/Shadow_IT).
[![](./phish2.png)](https://www.reddit.com/r/cybersecurity/comments/12ygfnw/comment/jhnqgnt/)
### Integrate access reviews into planning cycles
If your company is making quarterly or annual plans at this moment, now is a great time to introduce access reviews into that process. The second worst time to introduce new access policies and software is in the middle of a sprint when doing so disrupts a development team's cadence. (The worst time, of course, is after you've been breached.)
If you're partnering with engineering up front it's much easier to work with them to understand their needs. This will help you keep an eye open for unusual requests or network activity. Slotting access management reviews into an existing process helps to prevent the “security as blocker” problem (which often leads to friction between security and dev teams).
### Set up an Identity and Access Management solution
You can automate away some of the more routine and tedious aspects of Identity and Access Management (IAM) by implementing an identity provider. With SSO (Single Sign-On), you can map specific tasks to certain roles, and then map more finely scoped roles onto those, based on the tools a person is likely to need in their specific position at the company. For example, if someone has access to AWS, they automatically have access to your database. By using SSO, especially with [RBAC](https://goauthentik.io/docs/user-group-role/access-control) (Role-Based Access Control), you only need to get the initial setup right, rather than doing constant, ongoing reviews.
You can also set up alerts: these will send an email to the security team in the event that a new role is created so you can go and investigate.
Full disclosure: we make (and use in-house) [authentik](https://goauthentik.io/), which is an identity provider. There is an open source version, so even if youre strapped for cash you can still implement IAM. You can [check if authentik integrates with your application here](https://goauthentik.io/integrations/).
## What else should you be doing (at least) quarterly?
While were on the topic of security hygiene, there are other practices that are worth doing routinely (especially if youve automated your access management with an identity provider).
If youre on a path to a certification like SOC II or ISO/IEC 27001, you will already be implementing these on a quarterly basis. But as weve discussed on this blog before, [even if youre not seeking certification, theyre still worth doing](https://goauthentik.io/blog/2023-11-22-how-we-saved-over-100k#do-you-really-need-certifications) to ingrain good practices that become more important as you scale. As the Redditor suggested above, [security then goes from reactionary to policy making](https://www.reddit.com/r/cybersecurity/comments/12ygfnw/comment/jhnqgnt/).
- **Review and rotate long-lived credentials:** When was the last time your AWS access keys were rotated? Rotating credentials every 30/60/90 days contributes to [“defense in depth”, which we touched on before here](https://goauthentik.io/blog/2023-11-22-how-we-saved-over-100k#organizational-security). That way, if an access key is compromised, its no longer valid. Ideally, you would automate rotation so you can set and forget about this one.
- **Identify gaps in coverage:** A lot of these routine activities have a second-order effect of exposing failures in process. Youre kicking over stones and seeing whats lurking underneath. Anything you find is signal that theres a gap in coverage. How do you make sure you catch those long-lived tokens without having to go looking for them? Apart from automating rotation of credentials as suggested above, with AWS, for example, you could write a simple Python script to query IAM and send out an email when access tokens exceed a certain threshold.
### Designate ownership
If youre a small company and dont have dedicated security professionals to “own” the security of your apps and services, share the responsibilities. For example, one team member owns one application hosted on these five servers, and is responsible for them:
- They perform the routine hygiene checks (like access reviews)
- They also have the context necessary in the event of a critical vulnerability or even an outage (beyond security, you dont want people having to run around trying to figure out who does what or how to fix things in an emergency)
- They're responsible for writing the runbook and updating its procedures after an outage or vulnerability has been identified and managed
## PS: Plugging vulnerabilities—should you bother?
“_Oh no, this SCA tool says I have 453 vulnerabilities!_”
If your company is just building up your security posture, its easy to get distracted by seemingly alarming reports from SCA (Software Composition Analysis) tools. If you dont have dedicated security engineers and security is being shared among your development team, coding is probably much more familiar and attractive than taking on organizational security challenges. But, as mentioned above, breaches are far more likely to result from phishing than a vulnerability exploitation.
SCA tools notoriously produce a lot of [false positives](https://goauthentik.io/blog/2023-11-30-automated-security-versus-the-security-mindset#some-of-the-drawbacks-of-vulnerability-scanning-tools). This is not to say you should ignore vulnerabilities, but if youre strapped for time or resources, getting on top of access is far more likely to have an impact than painstakingly fixing every vulnerable piece of code.
### Patching is a high-impact measure that you _can_ implement routinely
A policy of applying software security updates within 14 days was another measure that suffered a decline in the past few years, according to the UK governments [Cyber security breaches survey 2023](https://www.gov.uk/government/statistics/cyber-security-breaches-survey-2023/cyber-security-breaches-survey-2023). The share of businesses and charities with this policy in place dropped from 43% in 2021 to just 31% in 2023.
The risk of falling behind on security patches is far greater than leaving a potential vulnerability unmitigated. If an attacker gains access to your server and your machines are out of date, they might be able to move from server to server. Now you have to go to your infrastructure team and figure out how to patch the server without taking it down. Instead, its wise to automate security patches, and you can use tools like [Chef](https://www.chef.io/products/chef-infra) or [Puppet](https://www.puppet.com/) to do so.
Wed be interested to hear: what are the worst security hygiene practices you have ever witnessed? What are some of your favorite good practices that we might not have mentioned above? Leave us a comment below, or connect with us via email at [hello@goauthentik.io](mailto:hello@goauthentik.io) or on [Discord](https://discord.com/channels/809154715984199690/809154716507963434)!
_[Rebecca Dodd](https://thebasementoffice.co.uk/) contributed to this post._
---

View File

@ -1,120 +0,0 @@
---
title: Dont hardcode your secrets in Kubernetes manifests, and other container pitfalls to avoid
slug: 2024-1-31-dont-hardcode-your-secrets
authors:
- name: Marc Schmitt
title: Infrastructure Engineer at Authentik Security Inc
url: https://github.com/rissson
image_url: https://github.com/rissson.png
tags:
- authentik
- access management
- Docker
- Kubernetes
- containers
- YAML
- configuration
- identity provider
- authentication
- Authentik Security
hide_table_of_contents: false
image: ./container.png
---
> **_authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Active Directory, and auth0. Authentik Security is a [public benefit company](https://github.com/OpenCoreVentures/ocv-public-benefit-company/blob/main/ocv-public-benefit-company-charter.md) building on top of the open source project._**
---
At the time of writing this post, the downfalls of using YAML as a templating language are being debated on [Hacker News](https://news.ycombinator.com/item?id=39101828). The headache of trying to customize Helm charts is a gripe we share at Authentik, which well get into below.
This post is about how we test and deploy [authentik](https://goauthentik.io/) using [containers](https://goauthentik.io/blog/2023/10/26/you-might-be-doing-containers-wrong/item), some gotchas weve encountered, and lessons weve picked up along the way.
When the [company founder](https://goauthentik.io/blog/2022-11-02-the-next-step-for-authentik) is from an Infrastructure background, and the first person he decides to hire (that's me!) is also from Infra, you can imagine that we end up with some pretty strong opinions about tools and processes.
This is part of an ongoing series about the tools and products that make up authentiks stack (you can also read about our [infrastructure tooling choices](https://goauthentik.io/blog/2023-12-21-five-lessons-from-choosing-infrastructure-tooling) and whats in our [security stack](https://goauthentik.io/blog/2023-11-22-how-we-saved-over-100k)).[](https://goauthentik.io/blog/2023-11-22-how-we-saved-over-100k)
![](./container.png)
<!--truncate-->
## How we use containers at authentik
### For development
For developing authentik on local machines, we use Docker to run external dependencies, like the database and Redis. Other tooling we rely on is distributed as Docker images, such as the generators for the frontend API client and other API clients. So we use Docker for a variety of purposes, both on local development machines and in CI.
### For deployment
To actually deploy authentik for our own, internal instance (Yes, we use authentik for all of our own apps here at the company, aka the proverbial dogfooding), we use Kubernetes. Users can choose either Docker Compose or Kubernetes for their authentik instances. Providing a Docker Compose file and container image plus a Helm chart for Kubernetes as part of the regular release package is becoming more and more standard, especially with new tools, so it made sense for us to follow suit. (The same applies for running Kubernetes internally—its what basically everyone in the industry is switching to.)
While customers dont _need_ the Helm chart to be able to deploy on Kubernetes (anyone could just take the container image we provide, look at the Docker Compose and adapt it to use on Kubernetes), its not a big lift for us to provide it, to eliminate extra steps for people wanting to use Kubernetes. These arent lengthy processes and they dont take much to maintain if set up correctly to begin with.
While writing Docker Compose files is pretty straightforward, the Helm chart can be tricky for developers who dont have exposure to operations and infrastructure, especially if theres no dedicated infrastructure engineer on the team. So you may need an infrastructure engineer, or at least a developer who runs their own homelab or has enough interest in infrastructure to spend the time learning how to do these things.
All of this is to say that we are not doing anything fancy here (in keeping with our approach of [optimizing for stability](https://goauthentik.io/blog/2023-12-21-five-lessons-from-choosing-infrastructure-tooling#5-optimize-for-stability-and-support)), but even though these are common paths, there are some pitfalls to watch out for…
## Dont hardcode your secrets
There are a number of tools out there that offer container images, Kubernetes manifests, or Helm charts as ways of setting up their services. With some tools, you have to watch out for sensitive information inadvertently getting exposed, because their proposed setup doesnt integrate well with GitOps philosophy nor with SecOps best practices.
For example, we mentioned in a previous post that [we use Wazuh for Security Information and Event Management](https://goauthentik.io/blog/2023-11-22-how-we-saved-over-100k#visibility-do-you-know-what-is-happening-in-your-environment). Wazuh is open source, well supported, and serves us well, but as mentioned in our previous post, it… takes some skill to deploy.
Instead of giving you a Helm chart that does everything nicely and automatically, Wazuh has you clone their repository, edit the manifest by hand, and then apply those changes manually on your Kubernetes cluster. Heres the tricky part when youre moving fast: if you hardcode secrets into those manifests (which are just YAML files, so hardly secure) and then push them to Git (because youre practicing GitOps), your secrets are now exposed.
What you want to do instead is have your secrets stored in a secret storage solution (e.g. [Vault](https://www.vaultproject.io/)) and then in your manifests instruct your Kubernetes cluster to go look for those secrets in Vault. That way youre only exposing where the secret can be retrieved from, not exposing the secret in the manifest. You still have most of the advantage of the GitOps philosophy while preserving your security.
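One common way to express this is sketched below, under the assumption that the External Secrets Operator is installed and a `SecretStore` pointing at Vault is already configured; every name and path here is a placeholder, not our actual setup.

```yaml
# Sketch: referencing a Vault secret from a manifest instead of hardcoding it.
# Assumes the External Secrets Operator and a SecretStore named "vault".
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: wazuh-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault
    kind: SecretStore
  target:
    name: wazuh-credentials        # Kubernetes Secret created from Vault data
  data:
    - secretKey: api-password
      remoteRef:
        key: secret/data/wazuh     # path in Vault (placeholder)
        property: api_password
```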
This pattern isnt unique to Wazuh, plenty of software has this challenge. So its definitely worth taking care when youre deploying a service and making sure that even if theyre not approaching things with a GitOps mindset, **you** are.
## Configurability & customization can bite you
For better or worse, Helm is widely used in the industry. It definitely has some annoying quirks. One of those quirks can lead to another common pitfall: not making your Helm charts configurable enough.
This is something to watch out for both as a _user_ of Helm charts (if youre installing services for your company) and as a _provider_ (if you offer a Helm chart for customers). The manifests that you apply to Kubernetes are, as you know, YAML files. Helm, being a templating tool, enables you to template out YAML from whatever data you provide at the time you install the Helm charts.
By default, any data or variable that you have hardcoded in the template is impossible to override later, so things can get messy quickly...
[![Quote: "Your config becoming more and more complex until it inevitably needs its own config, etc. You wind up with a sprawling, Byzantine mess."
We're already there with Helm. People write YAML because it's "just data". Then they want to package it up so they put it in a helm chart. Then they add variable substitution so that the name of resources can be configured by the chart user. Then they want to do some control flow or repetitiveness, so they use ifs and loops in templates. Then it needs configuring, so they add a values.yaml configuration file to configure the YAML templating engine's behaviour. Then it gets complicated so they define helper functions in the templating language, which are saved in another template file. So we have a YAML program being configured by a YAML configuration file, with functions written in a limited templating language. But that's sometimes not enough, so sometimes variables are also defined in the values.yaml and referenced elsewhere in the values.yaml with templating. This then gets passed to the templating system, which then evaluates that template-within-a-template, to produce YAML.](./HN-quote.png)](https://news.ycombinator.com/item?id=39102395)
If you want to modify the manifests generated by the Helm chart templates, theres no integrated way to do that in Helm, so you need to use [Kustomize](https://kustomize.io/) to override what has been generated from the Helm chart. You can also fork the Helm chart and modify the templates directly, but this is a cumbersome workaround and occasionally you might find licensing restrictions associated with doing this.
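A minimal sketch of that pattern follows, assuming a hypothetical chart and that you render with `kustomize build --enable-helm`; the chart name, repository, and patch target are placeholders.

```yaml
# Sketch: kustomization.yaml that renders a Helm chart and then patches the
# resulting manifests. All names are placeholders.
helmCharts:
  - name: example-chart
    repo: https://charts.example.com
    version: 1.2.3
    releaseName: example
    valuesInline:
      replicaCount: 2
patches:
  - target:
      kind: Deployment
      name: example
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/imagePullPolicy
        value: Always
```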
I cant count the number of PRs Ive opened to open source projects to fix their Helm charts because I couldnt override something I wanted to. It could be as simple as not supporting [IPv6](https://goauthentik.io/blog/2023-11-09-IPv6-addresses) and allowing people to replace `IPv4` with `IPv6`.
We actually just finalized some powerful customization options for our authentik chart (take a [sneak peek at the PR](https://github.com/goauthentik/helm/pull/230)), and will include it with the first 2024 release.
## Do review and update Kubernetes resources (but maybe not automatically)
As a refresher: there are two ways to control resources (RAM and CPU) in Kubernetes. These two settings provide levers for managing resources in your container:
1. **Requests:** Requests guarantee a set amount of resources to a container, and Kubernetes uses them when scheduling; you can raise them if youre expecting an increase in load (e.g. for a launch).
2. **Limits:** This is the maximum threshold for resources that a container can use (so one container doesnt starve another service by using too much of your resources); a minimal example follows this list.
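As a minimal illustration (the numbers are placeholders, not a sizing recommendation):

```yaml
# Sketch: requests and limits on a single container.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: ghcr.io/example/app:latest
      resources:
        requests:
          cpu: 250m          # guaranteed share, used for scheduling
          memory: 256Mi
        limits:
          cpu: "1"           # hard ceiling; CPU is throttled above this
          memory: 512Mi      # exceeding this gets the container OOM-killed
```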
As you grow your user base, you will likely have more traffic to your services and more activity, consuming more resources. Your resource setup from 18 months ago might not be appropriate for your current scale, so proactively checking resource usage and allocation can help prevent future problems (downtime or [wasted resources](https://sysdig.com/blog/millions-wasted-kubernetes/)).
At authentik, I use [Robusta KRR](https://github.com/robusta-dev/krr) to produce routine reports on Kubernetes resources. I review these and then make manual updates to requests and limits as necessary. There are tools that automatically update the resources directly on the cluster (i.e. without updating them in Git); however, automatic updates have ripple effects: if youre increasing resources, you need more nodes to run the services, for example. In general, whether changes are automated or manual, you want to be aware of them in case there are downstream effects that you need to adjust for.
> “There are some specific use cases where automatic adjustment of resources makes sense, but otherwise it's probably wisest to _automate the reporting, but manually conduct the reviews and updates_.”
## Consider setting up firewall rules at the outset
[We've talked](https://goauthentik.io/blog/2023/10/26/you-might-be-doing-containers-wrong/item#why-you-should-use-one-container-per-service) ([a lot](https://goauthentik.io/blog/2023-12-21-five-lessons-from-choosing-infrastructure-tooling#2-build-with-scale-in-mind)) [on this blog](https://goauthentik.io/blog/2023-11-22-how-we-saved-over-100k#do-you-really-need-certifications) about setting things up the right way from the beginning. One instance where we actually didn't walk this talk was with firewalling within our Kubernetes cluster.
In general, we try to follow security best practices for the services we deploy, like not running as root and not running privileged (which isn't enabled by default anyway). We're still missing the firewall piece though, so all our services can talk to each other, which in retrospect was a mistake.
This isn't a problem for now (and it's not at all unusual), but we are working on [a SaaS version of authentik](https://goauthentik.io/blog/2024-1-12-happy-new-year-from-authentik-security#building-out-our-saas-offering), so there will be some customer code running on our infrastructure. Since we obviously don't want that code to be able to make requests to our internal services, we need firewalls.
### We can't just turn off the tap
Currently, all communication between services (apps, DBs, etc.) is allowed. As we implement firewall rules, we're effectively flipping that around: nothing is allowed, and we whitelist only what's needed. That approach is much easier when you're setting things up from scratch: with nothing allowed, anything that needs to communicate fails immediately, you get instant feedback, and you can go whitelist it.

Retrofitting is harder. We can't just block everything and whitelist nothing, because we'd kill our services. We have to think very carefully about how we're going to apply firewall rules and make sure everything is in place before we turn off the tap.
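For reference, the deny-by-default-plus-whitelist pattern is typically expressed with NetworkPolicies, roughly like the sketch below. The namespace, labels, and port are hypothetical, and your CNI has to actually enforce NetworkPolicies for any of this to take effect:

```yaml
# Deny all inbound traffic to every pod in the namespace...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: example
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so all inbound traffic is denied
---
# ...then whitelist one flow: the app may talk to the database on 5432
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-to-db
  namespace: example
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: my-app
      ports:
        - protocol: TCP
          port: 5432
```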
This is one area where we didn't anticipate our changing needs as we grow, and didn't set things up the right way from the outset. But then, we didn't have a SaaS offering to start with, so we didn't optimize for it. It just shows that you can't anticipate everything. Technically, we could apply those firewall rules only in the SaaS context and leave the existing setup as is, but while we're building for SaaS, we might as well retrofit firewall rules everywhere and be more secure overall, even if it costs some engineering time to take all those precautions.
I have a homelab setup where I test new infrastructure configurations, so I was able to experiment with firewalling before implementing it at authentik. If someone on your team has a homelab, this is a great way to validate configurations, experiment, and learn in the process. But if you don't have that as a resource, it's still a good idea to have some kind of sandbox environment to iterate on your infrastructure before implementing it for real. That could be a Kubernetes cluster dedicated to sandboxing that's available to the whole infrastructure team, or running a small Kubernetes cluster on a local machine (using something like [kind](https://kind.sigs.k8s.io/) or [minikube](https://minikube.sigs.k8s.io/docs/)).
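For the local option, a throwaway multi-node cluster with kind only takes a small config file. This is a minimal sketch, and the node layout is just an example:

```yaml
# kind-config.yaml -- small sandbox cluster for testing infrastructure changes
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Creating it with `kind create cluster --config kind-config.yaml` gives you a cluster you can break and rebuild in minutes, which is exactly what you want when trying out something like a default-deny firewall rollout.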
Let us know what practices you use, or any practices to avoid, and how you do your testing to find just the right balance of automation, GitOps, SecOps, and sanity. Reach out to us with an email to [hello@goauthentik.io](mailto:hello@goauthentik.io) or on [Discord](https://discord.com/channels/809154715984199690/809154716507963434); we look forward to hearing from you!
_[Rebecca Dodd](https://thebasementoffice.co.uk/) contributed to this post._
---

View File

@ -4,6 +4,6 @@ hide_table_of_contents: true
# API Browser
import APIBrowser from "../../src/components/APIBrowser";
import ApiDocMdx from "@theme/ApiDocMdx";
<APIBrowser />
<ApiDocMdx id="main" />

View File

@ -5,13 +5,10 @@ import type * as Preset from "@docusaurus/preset-classic";
module.exports = async function (): Promise<Config> {
const remarkGithub = (await import("remark-github")).default;
const defaultBuildUrl = (await import("remark-github")).defaultBuildUrl;
const footerEmail = await fs.readFile("src/footer.html", {
encoding: "utf-8",
});
return {
title: "authentik",
tagline: "Bring all of your authentication into a unified platform.",
url: "https://goauthentik.io",
url: "https://docs.goauthentik.io",
baseUrl: "/",
onBrokenLinks: "throw",
favicon: "img/icon.png",
@ -25,7 +22,11 @@ module.exports = async function (): Promise<Config> {
src: "img/icon_left_brand.svg",
},
items: [
{ to: "blog", label: "Blog", position: "left" },
{
to: "https://goauthentik.io/blog",
label: "Blog",
position: "left",
},
{
to: "docs/",
label: "Docs",
@ -42,7 +43,7 @@ module.exports = async function (): Promise<Config> {
position: "left",
},
{
to: "pricing/",
to: "https://goauthentik.io/pricing/",
label: "Pricing",
position: "left",
},
@ -61,68 +62,7 @@ module.exports = async function (): Promise<Config> {
],
},
footer: {
links: [
{
title: "Subscribe to authentik News",
items: [
{
html: footerEmail,
},
],
},
{
title: "Documentation",
items: [
{
label: "Documentation",
to: "docs/",
},
{
label: "Integrations",
to: "integrations/",
},
{
label: "Developer Documentation",
to: "developer-docs/",
},
{
label: "Installations",
to: "docs/installation/",
},
],
},
{
title: "More",
items: [
{
to: "jobs/",
label: "Jobs",
position: "left",
},
{
label: "GitHub",
href: "https://github.com/goauthentik/authentik",
},
{
label: "Discord",
href: "https://goauthentik.io/discord",
},
],
},
{
title: "Legal",
items: [
{
to: "legal/terms",
label: "Terms & Conditions",
},
{
to: "legal/privacy-policy",
label: "Privacy policy",
},
],
},
],
links: [],
copyright: `Copyright © ${new Date().getFullYear()} Authentik Security Inc. Built with Docusaurus.`,
},
tableOfContents: {
@ -168,10 +108,6 @@ module.exports = async function (): Promise<Config> {
theme: {
customCss: require.resolve("./src/css/custom.css"),
},
gtag: {
trackingID: "G-9MVR9WZFZH",
anonymizeIP: true,
},
blog: {
showReadingTime: true,
blogSidebarTitle: "All our posts",
@ -179,6 +115,17 @@ module.exports = async function (): Promise<Config> {
},
} satisfies Preset.Options,
],
[
"redocusaurus",
{
specs: [
{
id: "main",
spec: "static/schema.yaml",
},
],
},
],
],
plugins: [
[
@ -208,15 +155,5 @@ module.exports = async function (): Promise<Config> {
mermaid: true,
},
themes: ["@docusaurus/theme-mermaid"],
scripts: [
{
src: "https://goauthentik.io/js/script.js",
async: true,
"data-domain": "goauthentik.io",
},
{
src: "https://boards.greenhouse.io/embed/job_board/js?for=authentiksecurity",
},
],
};
};

View File

@ -1,129 +0,0 @@
const config = require("./docusaurus.config");
import type { Config } from "@docusaurus/types";
module.exports = async function (): Promise<Config> {
const remarkGithub = (await import("remark-github")).default;
const defaultBuildUrl = (await import("remark-github")).defaultBuildUrl;
const mainConfig = await config();
return {
title: "authentik",
tagline: "Making authentication simple.",
url: "https://goauthentik.io",
baseUrl: "/if/help/",
onBrokenLinks: "ignore",
favicon: "img/icon.png",
organizationName: "BeryJu",
projectName: "authentik",
themeConfig: {
navbar: {
logo: {
alt: "authentik logo",
src: "img/icon_left_brand.svg",
},
items: [
{
to: "docs/",
activeBasePath: "docs",
label: "Docs",
position: "left",
},
{
to: "integrations/",
activeBasePath: "integrations",
label: "Integrations",
position: "left",
},
{
to: "developer-docs/",
activeBasePath: "developer-docs",
label: "Developer Docs",
position: "left",
},
{
href: "https://github.com/goauthentik/authentik",
label: "GitHub",
position: "right",
},
{
href: "https://goauthentik.io/discord",
label: "Discord",
position: "right",
},
],
},
footer: {
links: [],
copyright: mainConfig.themeConfig.footer.copyright,
},
colorMode: mainConfig.themeConfig.colorMode,
tableOfContents: mainConfig.themeConfig.tableOfContents,
prims: mainConfig.themeConfig.prism,
},
presets: [
[
"@docusaurus/preset-classic",
{
docs: {
id: "docs",
sidebarPath: require.resolve("./sidebars.js"),
editUrl:
"https://github.com/goauthentik/authentik/edit/main/website/",
remarkPlugins: [
[
remarkGithub,
{
repository: "goauthentik/authentik",
// Only replace issues and PR links
buildUrl: function (values) {
return values.type === "issue"
? defaultBuildUrl(values)
: false;
},
},
],
],
},
pages: false,
theme: {
customCss: require.resolve("./src/css/custom.css"),
},
},
],
],
plugins: [
[
"@docusaurus/plugin-content-docs",
{
id: "docsIntegrations",
path: "integrations",
routeBasePath: "integrations",
sidebarPath: require.resolve("./sidebarsIntegrations.js"),
editUrl:
"https://github.com/goauthentik/authentik/edit/main/website/",
},
],
[
"@docusaurus/plugin-content-docs",
{
id: "docsDevelopers",
path: "developer-docs",
routeBasePath: "developer-docs",
sidebarPath: require.resolve("./sidebarsDev.js"),
editUrl:
"https://github.com/goauthentik/authentik/edit/main/website/",
},
],
[
"@docusaurus/plugin-client-redirects",
{
redirects: [
{
to: "/docs/",
from: ["/"],
},
],
},
],
],
};
};

Some files were not shown because too many files have changed in this diff