Shipping web components in 2020
April 21, 2020. 2,507 words and a 13 min read. Post version 1.2
Recently, we shipped Banked.js, a component library for integrating account-to-account payments into any web application. We wanted to share what that process looked like, from vanilla JavaScript to shipping a web component: what we thought about, what we tried, and why. We also wanted to talk about what it's like to build and ship a web component in 2020.
What is Banked.js and why did you create it?
The Banked platform enables anyone to take direct account-to-account payments. We provide several ways of doing this, ranging from the very easy to implement (using our hosted checkout) to the more involved (building your own custom checkout experience using our Checkout API).
When we spoke to our customers, we often heard the feedback that they wanted a middle ground. Specifically, they wanted a way of embedding our checkout inside their user experience (and therefore controlling it) but with no need to build it entirely themselves.
The core of Banked’s checkout is a screen where users select which bank they’d like to pay with. From here, they are redirected to their chosen bank’s authorisation flow and back to a callback URL provided by the merchant.
We wanted to give our customers an easy way of integrating this UI and behaviour into their web applications. Embedding the UI widget with JavaScript was the obvious answer, as many of our customers have experience with embedding payment widgets, but that approach comes with concerns for everybody.
The blessing and curse of being on other people’s pages
Embedded user experience
Embedding a UI into one you don't control forces you to answer a few questions:
- What’s the minimum UX/UI you can provide to deliver value?
- How should our embed react (if at all) to the UX and layout around it?
- Should implementors be able to customise our embed? If so, how much? Would allowing them to customise it affect the ability to provide a service? Or lead to ‘worse’ outcomes for their users?
After some deliberation, this is how we answered:
- We're just going to embed the bank selection UI
- Yes, it should react (in a limited way) to the surrounding UI (being responsive to screen size/orientation), expanding to fill its parent container
- It should only allow customisation in a very controlled way
The customisation we offered was simple: you can use our reactive button or not. There are a lot of hard-learned lessons and optimisations baked into this screen from our own hosted checkout (e.g. how does it react to a bank not being available?), and customisation might mean a bad experience for end users and poor conversion for merchants; if they really want that level of control, they can always implement our Checkout API.
So, why did we ship a button at all? And why do we recommend our customers use it by default?
Two reasons:
- We've learned that giving users more context for what will happen next (e.g. going to their mobile banking app) helps conversion; branding the button after a bank is selected helps too
- The next step is redirecting users to their selected bank's authorisation URL. Unless this happens after a 'user sourced event', like a button click, many browsers will prevent the bank's app deep-link from opening. We learned this lesson the hard way and we want to avoid our customers needing to learn it too!
Being good citizens on our customers' pages
Page weight and performance are increasingly important for our merchant customers, not least because of the impact they have on conversion rates; we need to vigorously defend every byte we ship to them and every tick of the browser's rendering we use.
This led us to our Rules of the Game:
- Bundle size should be as small as humanly possible
- We should constantly track, measure, and optimise on-page performance
- If we break, we break gracefully and have as few side effects on the page as possible
We measure bundle size (1) through webpack's performance hints, erroring our build if we go over our pre-defined size limits; we also have bundle-size optimisation as part of the 'definition of done' for tasks we work on with Banked.js. Measuring and optimising on-page performance (2) is achieved through rigorous testing and usage of the `window.performance` browser API.
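For (1), webpack's performance hints can enforce the budget at build time; a minimal sketch, with illustrative limits rather than Banked's actual ones:

```javascript
// webpack.config.js (sketch): fail the build when a bundle exceeds a
// pre-defined size budget. The 50 KB figure is illustrative only.
module.exports = {
  // ...entry, output, and loaders elided...
  performance: {
    hints: 'error',               // turn size-limit violations into build errors
    maxAssetSize: 50 * 1024,      // bytes, per emitted asset
    maxEntrypointSize: 50 * 1024, // bytes, per entrypoint
  },
};
```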
However, anyone who has built an embeddable UI knows breaking gracefully (3) is hard. Only recently has the CSS community started embracing scoping, without which styling clashes and side-effects from the parent page, and the embed itself, can have serious consequences. Beyond CSS, JavaScript's global mutable state and single threaded event loop can make small changes have unintended functional or performance implications.
How could we solve these issues? Use tooling to automatically scope our CSS declarations? Use WebWorkers to avoid on page performance impacts? Lint and statically analyse our code as much as possible to avoid common foot-guns? These are all encapsulation problems, and we eventually realised web components and their associated web APIs mitigate many of these issues.
Embedding an iframe could have helped us solve these issues but it would have also introduced a lot of others: working around CSP and frame busting protections on our customers' sites; ad and script blockers being increasingly aggressive with blocking iframes; and browser security protections limiting access to the top object from within the frame, preventing easy two-way integration with the embedding page.
Making implementors' lives as easy as possible
An explicit goal for Banked.js was to make it as easy to integrate and use as possible. When we first started thinking about this project, we considered directly building components for JavaScript frameworks (like React or Vue), but when we investigated we realised that: a) adoption of these frameworks wasn't high enough amongst our customer base to justify it, and b) the cardinality of frameworks, versions, and tooling amongst those who had adopted them was high enough that it would take forever to reach significant coverage.
So we went down the path of being framework agnostic, exposing a simple enough API to integrate with any framework and version easily, ensuring a consistently straightforward implementation for our users.
Our design goal was for the API to be DOM based: you give us a tag on the page and a payment ID and we'll take care of everything else. Our implementors shouldn't have to care about order of precedence, loading, or asynchronicity unless they choose to. Web Components ended up adding huge value here, saving us a considerable amount of work building on page APIs (which we built ourselves in our first non Web Component version).
Web Components also gave us a lot of 'defensiveness' for free. We want to provide a reliable service to our customers, and sometimes that involves us protecting them from themselves; Web Component's encapsulation gives us a lot of that protection out of the box.
Version 1: Vanilla JavaScript and fighting the battle for encapsulation
The vision was simple: include a JavaScript snippet and give a DOM node a magic ID. Voila, you have your bank selection screen.
<head>
  <title>Your Application</title>
</head>
<body>
  <script src="https://js.banked.com/v1" data-api-key="YOUR_CLIENT_KEY" type="text/javascript"></script>
  <div id="banked-provider-list" data-payment-id="PAYMENT_ID"></div>
</body>
We thought this was simple, clean, easy to understand, and could be integrated easily into most tools and frameworks. You could then attach a DOM event listener to capture the custom event emitted by the component:
document.getElementById('banked-provider-list').addEventListener('banked-provider-set', function (e) {
  window.location.replace(e.detail.redirectUrl);
});
We would handle all the mounting, API requests, and asynchronicity internally, leaving very little work for the implementor.
It worked, but it felt brittle.
- Magic IDs felt easily broken (named access on the `window` object could have unintended side effects, for example) and could be confusing to implement (did it have to be on a `div` element? Why not an `article`?)
- We had to write a lot of code to handle the order of precedence and rendering (e.g. what happens if the `data-payment-id` isn't set until after the page has rendered?)
- Even if we namespaced all our CSS, any change to global elements (like form fields, links, or buttons) would have serious consequences for our layout. Writing overly specific CSS selectors, littering our code with `!important`, or inlining our CSS was hard to maintain and would lead to weird edge-case performance and rendering issues
- We had to write a disconcerting amount of JavaScript, and it all needed to run in the same event loop as the encapsulating page. It proved hard to do this defensively and in a way that we were confident wouldn't impact page performance
We also hadn't planned for the user-sourced events needed to keep the bank redirect working, but when we gave this version to the rest of the engineering team to play with, they quickly implemented this:
document.getElementById('banked-provider-list').addEventListener('banked-provider-set', function (e) {
  window.location.replace(e.detail.redirectUrl);
});
This didn't work, in the hard-to-diagnose-and-understand way described above. We realised this would be common for our external implementors too.
Version 2: Web Components and "Pfft, we don't need none of that tooling"
One of our engineers had a brainwave when thinking about the issues we'd run into: 'Web Components!'
Web Components are now a mature and well supported set of web APIs. They seemed to be perfectly designed for our use case and the challenges we were running into (particularly the Shadow DOM). We quickly built a new version, using the raw APIs, and it looked great:
<head>
<title>Your Application</title>
</head>
<body>
<script src="https://js.banked.com/v1" data-api-key="YOUR_CLIENT_KEY" type="text/javascript"></script>
<banked-provider-list payment-id="PAYMENT_ID"></banked-provider-list>
<banked-pay-button></banked-pay-button>
</body>
(Notice we also added a second component, the button)
It provided most of the encapsulation we were after, handled the mounting and initialising of our component, and we had to write zero lines of code to do it. It also provided a much clearer, more semantic API for implementors to understand: no more magic strings and ambiguous DOM nodes.
It even provided nice ways of handling event emission, and integrated as part of a `form` element out of the box.
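The event contract can be sketched with the standard `EventTarget` API; here an `EventTarget` stands in for the `<banked-provider-list>` element, and the bank URL is made up:

```javascript
// Minimal sketch of the component's event contract. In the browser the
// component dispatches a CustomEvent with a `detail` payload; a plain
// Event with a `detail` property behaves the same for this illustration.
const component = new EventTarget();
const seen = [];

component.addEventListener('banked-provider-set', (e) => {
  // In a real page this is where you'd redirect to the bank's
  // authorisation URL, from inside a user-sourced event.
  seen.push(e.detail.redirectUrl);
});

// The component emits this when a user picks their bank:
const event = new Event('banked-provider-set');
event.detail = { redirectUrl: 'https://bank.example/authorise' };
component.dispatchEvent(event);
```

Listeners attach with `addEventListener` exactly as in the earlier snippet, so implementors never touch the component's internals.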
Results from our engineering team were positive: there were far fewer gotchas and heffalump traps when they created toy implementations. Most of them got it working in minutes!
A new challenge emerged. We'd built a successful thing, but the tooling necessary to make us happy with its integrity, quality, and safety eluded us. Tooling is not something JavaScript projects usually lack, so we were surprised to find so few options for testing, linting, and building Web Components.
Before we started on V2, we looked at Polymer and were pretty confused about its current status (parts of it are deprecated but still in use? Other projects under its banner appear to do similar things to the original Polymer library, but not all of them?). It didn't inspire confidence, and we discarded it in favour of quickly getting something up and running.
This holds true for most of the Web Components ecosystem: it's confusing, buggy, and riddled with out-of-date docs and confusingly deprecated tools. A particularly annoying issue was the lack of support for (or bugginess of) Web Components implementations in popular testing tools; the community's default fallback is to say, 'You need to use a full browser runtime' (like Karma). Full support for non-browser headless JS runtimes would have made this process, and our CI infrastructure, much simpler.
Version 3: Web Components and, "Turns out we do need that tooling"
During our search through the dark and murky corners of the Web Components community, we came across Open WC: a laudable and successful effort to combine various tools and frameworks into a usable, opinionated, and reliable toolchain for building web components.
It provides:
- Working (and sensible) linters (ESLint and Stylelint) configured for working with Web Components
- A framework and tooling for development, which was otherwise difficult and fragile to maintain
- A suite of tools for testing (unit, integration and accessibility)
- Build tooling (for our choice of tool, webpack, but also Rollup)
- Deployment and demo tooling (through a pretty sweet Storybook integration)
We quickly moved Banked.js to Open WC and haven't looked back. It meant we could delete a huge amount of home-brewed tooling, and the tradeoffs have been worth it.
It imposes a small bundle-size penalty (mainly through its use of LitElement), but that was a small price worth paying for the development ergonomics and maintenance benefits. We've also changed its default build, and don't use the `<script>`-based ES modules it comes configured with.
So now we're left with a useful, safe, and reliable component any of our customers can use to embed account-to-account payments into their web app:
<head>
<title>Your Application</title>
</head>
<body>
<script src="https://js.banked.com/v1" data-api-key="YOUR_CLIENT_KEY" type="text/javascript"></script>
<banked-provider-list payment-id="PAYMENT_ID"></banked-provider-list>
<banked-pay-button></banked-pay-button>
</body>
Serving Banked.js
After we build Banked.js via GitHub Actions, we deploy it to Cloudflare's KV store and serve it to end users via a Worker. Cloudflare Workers are serverless functions that are distributed and run in Cloudflare's 200+ points of presence (POPs).
We use Workers (rather than Cloudflare's pull-based CDN) because they enable us to do a few things that just aren't possible (or, if possible, not easy) with traditional CDNs, namely:
- We can serve a debug build if the request comes from a specified domain or with a certain cookie set
- We can serve different versions to different user agents if we want to dynamically include polyfills
- We can multivariate test new versions of the script without implementors needing to update their config
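For the multivariate case, variant selection can be as simple as deterministically hashing a stable request property (a cookie value, say) into a bucket; this sketch is illustrative, not Banked's actual logic:

```javascript
// Deterministically map a stable identifier (e.g. a cookie value) to one
// of N script variants, so a given user always receives the same build.
function pickVariant(id, variants) {
  let hash = 0;
  for (const ch of id) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return variants[hash % variants.length];
}

// Example: the same id always lands in the same bucket.
const variant = pickVariant('session-abc123', ['banked.js', 'banked-experiment.js']);
```

A Worker would then read the chosen file from the KV store, exactly as the debug-cookie example does.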
The example below is a worker function that serves a debug build if a cookie is present on the request (getting the JS from the KV store is omitted for brevity):
function getCookie(request, name) {
  let result = null;
  const cookieString = request.headers.get('Cookie');
  if (cookieString) {
    cookieString.split(';').forEach((cookie) => {
      const cookieName = cookie.split('=')[0].trim();
      if (cookieName === name) {
        result = cookie.split('=')[1];
      }
    });
  }
  return result;
}
async function handleRequest(request) {
  const cookie = getCookie(request, '__BANKED-DEBUG');
  if (cookie === 'TRUE') {
    return new Response('[banked-debug.js]');
  }
  return new Response('[banked.js]');
}
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});
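Because `getCookie` only depends on `headers.get`, it can be exercised outside the Worker runtime with a stubbed request; a sketch in which a `Map` stands in for the `Headers` interface and the cookie values are made up:

```javascript
// The getCookie helper from the Worker above, repeated here so the
// sketch is self-contained and runnable in any JS runtime.
function getCookie(request, name) {
  let result = null;
  const cookieString = request.headers.get('Cookie');
  if (cookieString) {
    cookieString.split(';').forEach((cookie) => {
      const cookieName = cookie.split('=')[0].trim();
      if (cookieName === name) {
        result = cookie.split('=')[1];
      }
    });
  }
  return result;
}

// A Map's get() is enough to stand in for the Headers interface here.
const request = {
  headers: new Map([['Cookie', 'theme=dark; __BANKED-DEBUG=TRUE']]),
};
```

Here `getCookie(request, '__BANKED-DEBUG')` returns `'TRUE'`, so `handleRequest` would serve the debug build.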
The future of embedded checkout at Banked
We've ended up very happy with Banked.js: it provides our customers with a valuable, lightweight way of taking account-to-account payments and is easy and safe for us to iterate on and improve. Our aim is to open source Banked.js in the next few weeks.
We're also looking at how we can bring the same easy, safe integration experience to our customers' native applications. Watch this space!