Imagine you have a scrollable area that displays chronological content like a feed (unlike LinkedIn, Facebook or Twitter, which are reverse-chronological), similar to a chat window. You want the region to start scrolled to the bottom, which is where the relevant content is.
You could do it with JavaScript, like this:
const region = document.querySelector('.my-region')
region.scrollTop = region.scrollHeight
But using JavaScript means it can fail, or take a while to run, potentially after the user has already begun scrolling in that region. Not perfect.
Another approach is to do it with CSS. The idea is to use a reverse-column flex layout so the scroll starts bottom-anchored. Of course, this reverses the order of elements, so they also need to be reversed in the DOM to be displayed in the right order (elements near the bottom need to appear first in the DOM).
See the Pen Untitled by Kitty Giraudel (@KittyGiraudel) on CodePen.
One thing worth mentioning is that using column-reverse creates a disconnect between the visual order and the DOM order, which can be confusing for screen-reader users. For someone using a screen reader, the first element in the DOM is now the “latest” element from the feed. I would argue it is better this way, since this is the new and relevant content, but it may not be the expected behavior.
Interestingly enough, there has been some movement in that area very recently (as in within the last month), and Blink is already working on an implementation. Some work is being done on a reading-order CSS property (or perhaps reading-flow, the name appears to be pending) that would enable developers to help screen readers figure out the best reading order.
reading-order: normal | flex-visual | flex-flow | grid-rows | grid-columns | grid-order
In our case, we could use reading-order: flex-visual to align the way sighted users and screen-reader users consume our feed.
I feel like this post is a good opportunity to remind everyone that scrollable areas are not accessible by default and need some work.
For starters, they need a tabindex="0" attribute so they can be focused and scrolled with the keyboard. This satisfies 2.1.1: Keyboard (Level A) and 2.1.3: Keyboard (No Exception) (Level AAA).
Additionally, they need an accessible name, either via aria-label or aria-labelledby mapped to a heading element for instance. And for their label to be applied, they need a non-presentational role, such as role="region".
Adrian Roselli has a great article about scrollable regions.
I used to have a lot of pure functions which accepted a bi-dimensional array, but I decided that using a Grid class would be better: it can keep track of its own data and offer plenty of methods to access and manipulate it.
To represent a pair of coordinates, we’ll use a Coords type. It’s an alias for [number, number] where the first value is the row index (ri, or Y in a traditional coordinate system) and the second value is the column index (ci, or X).
Because JavaScript doesn’t have a native tuple type which can be used in maps, arrays and sets, we have to resort to a string representation of our coordinates. This is our Point type, which is ${number},${number}.
Additionally, we have helpers toPoint and toCoords to convert a Coords into a Point and a Point into a Coords respectively.
type Coords = [number, number]
type Point = `${number},${number}`
const toPoint = (input: Coords) => input.join(',') as Point
const toCoords = (input: Point) => input.split(',').map(Number) as Coords
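As a quick check, the two helpers round-trip cleanly. Here they are again in a self-contained snippet (the definitions are repeated from above):

```typescript
type Coords = [number, number]
type Point = `${number},${number}`

const toPoint = (input: Coords) => input.join(',') as Point
const toCoords = (input: Point) => input.split(',').map(Number) as Coords

// Converting back and forth yields the original value
toPoint([2, 3]) // '2,3'
toCoords('2,3') // [2, 3]
toPoint(toCoords('7,1')) // '7,1'
```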
Throughout the code, I will use ri (row index) and ci (column index) in place of Y and X respectively. I tend to find the code easier to understand when thinking in rows and columns rather than in the X,Y coordinate system. Coincidentally, I express coordinates as Y,X (ri,ci) since bi-dimensional arrays are read row first, then column.
Our Grid class really is a wrapper around a bi-dimensional array. It can be instantiated with dimensions, and an optional setter which receives the row index and the column index.
class Grid<T> {
private data: T[][]
constructor(
width: number,
height: number = width,
value: T | null | ((coords: Coords) => T) = null
) {
this.data = Array.from({ length: height }, (_, ri) =>
Array.from({ length: width }, (_, ci) =>
typeof value === 'function'
? (value as CallableFunction)([ri, ci])
: value
)
)
}
get rows() {
return this.data
}
get columns() {
return Array.from({ length: this.width }, (_, ci) =>
this.rows.map(row => row.at(ci) as T)
)
}
// More to come …
}
// Examples
const grid = new Grid(0) // 0x0 grid
const grid = new Grid(3) // 3x3 grid
const grid = new Grid(5, 3) // 3x5 grid
const grid = new Grid(5, 3, null) // 3x5 grid with `null` everywhere
const grid = new Grid(5, 3, ([ri, ci]) => {
// Initialize the cell at ri,ci to the return value of this function
})
What I noticed with Advent of Code is that, more often than not, it is useful to be able to instantiate a grid from an existing data structure: either a bi-dimensional array already, or an array of strings (where each string is considered a row, with one column per character).
For these cases, I came up with 2 static methods which return a grid instance. There is a lot going on but it’s mostly TypeScript shenanigans. The from method reads the width and height from the input and instantiates a grid with them. The fromRows method uses the from static method to instantiate a grid.
type Mapper<I, O> = (value: I, coords: Coords) => O
const identity = <I, O>(value: I, coords: Coords) => value as unknown as O
class Grid<T> {
// …
static from<I, O = I>(input: I[][], mapper: Mapper<I, O> = identity) {
return new Grid<O>(input[0].length, input.length, ([ri, ci]) =>
mapper(input[ri][ci], [ri, ci])
)
}
static fromRows<O = string>(
input: string[],
mapper: Mapper<string, O> = identity
) {
return Grid.from(
input.map(row => Array.from(row)),
mapper
)
}
}
// Examples
const grid = Grid.fromRows('123\n456\n789'.split('\n'), Number)
/**
* [ [ 1, 2, 3 ]
* [ 4, 5, 6 ]
* [ 7, 8, 9 ] ]
*/
Now that we have solid ways to instantiate grids, we can write getters to retrieve their dimensions. They are pretty straightforward:
class Grid<T> {
// …
get width() {
return this.data.length ? this.data[0].length : 0
}
get height() {
return this.data.length
}
}
// Examples
const grid = Grid.fromRows('12\n45\n78'.split('\n'), Number)
console.assert(grid.width === 2)
console.assert(grid.height === 3)
Then, we need a way to read the value stored at a set of coordinates.
class Grid<T> {
// …
get(position: Point | Coords) {
const [ri, ci] =
typeof position === 'string' ? toCoords(position) : position
return this.data?.[ri]?.[ci]
}
at(position: Point | Coords) {
return this.get(position)
}
}
// Examples
const grid = Grid.fromRows('123\n456\n789'.split('\n'), Number)
const topLeft = grid.at('0,0') // 1
const center = grid.at([1, 1]) // 5
const outOfBound = grid.at([10, 10]) // undefined
I decided not to throw an error when attempting to access a cell that’s out of bounds. It would probably be safer to warn or throw, but in the scope of Advent of Code, there were plenty of cases where we just want to get undefined instead.
When setting a value though, we do want to make sure the coordinates exist in the grid. This is what it looks like:
class Grid<T> {
// …
set(position: Point | Coords, value: T) {
const [ri, ci] =
typeof position === 'string' ? toCoords(position) : position
if (ri < 0 || ri > this.height - 1) {
throw new Error(
`Cannot set value at position ${position} since row ${ri} is out of bound for grid of height ${this.height}.`
)
}
if (ci < 0 || ci > this.width - 1) {
throw new Error(
`Cannot set value at position ${position} since column ${ci} is out of bound for grid of width ${this.width}.`
)
}
this.data[ri][ci] = value
return this
}
}
// Examples
const grid = Grid.fromRows('123\n456\n789'.split('\n'), Number)
grid.set('0,0', 'A').set([1, 1], 'E')
grid.set([10, 10], 'Z') // Throws because out of bound
In most cases though, we want to be able to iterate over our grid. We’re going to implement most array methods like forEach, map, filter, every… Let’s start with forEach. We’re going to make sure the function we pass to all these methods has a single signature so they’re easy to use. It should accept the current cell value (what’s actually stored in the grid cell) and its coordinates.
class Grid<T> {
// …
forEach(handler: (item: T, coords: Coords) => void) {
this.rows.forEach((row, ri) =>
row.forEach((value, ci) => handler(value, [ri, ci]))
)
}
}
// Examples
const grid = Grid.fromRows('123\n456\n789'.split('\n'), Number)
grid.forEach((value, coords) => {
console.log('Value at', coords, 'is', value)
})
Now, mapping. Mapping is a bit special because the goal is to transform the grid values by applying the given function to them. Array.prototype.map returns a new array though, so we should probably do the same: Grid.prototype.map should return a new grid.
class Grid<T> {
// …
map<O>(handler: (item: T, coords: Coords) => O) {
const next = Grid.from(this.data) as Grid<O>
this.forEach((value, coords) => next.set(coords, handler(value, coords)))
return next
}
}
// Examples
const grid = Grid.fromRows('123\n456\n789'.split('\n'), Number)
const next = grid.map((value, coords) => value * value)
/**
* [ [ 1, 4, 9 ]
* [ 16, 25, 36 ]
* [ 49, 64, 81 ] ]
*/
Next, reducing the grid into a single value. It works the same way as Array.prototype.reduce: it takes a reducer function which handles an accumulator value, and an initial value for the accumulator.
class Grid<T> {
// …
reduce<O>(handler: (acc: O, item: T, coords: Coords) => O, initialValue: O) {
return this.data.reduce(
(accRow, row, ri) =>
row.reduce(
(accCol, item, ci) => handler(accCol, item, [ri, ci]),
accRow
),
initialValue
)
}
}
// Examples
const grid = Grid.fromRows('123\n456\n789'.split('\n'), Number)
grid.reduce((total, value) => total + value, 0) // 45
We can use that new reduce method to build another handy thing: a function that finds the coordinates matching a given predicate. Call it findCoords.
class Grid<T> {
// …
findCoords(predicate: (item: T, coords: Coords) => boolean) {
return this.reduce<Coords | undefined>(
(acc, item, coords) => acc ?? (predicate(item, coords) ? coords : acc),
undefined
)
}
}
// Examples
const grid = Grid.fromRows('123\n456\n789'.split('\n'), Number)
grid.findCoords(value => value === 7) // [2, 0]
Writing a find method becomes very easy now that we have this one:
class Grid<T> {
// …
find(predicate: (item: T, coords: Coords) => boolean) {
const coords = this.findCoords(predicate)
return coords ? this.get(coords) : undefined
}
}
// Examples
const grid = Grid.fromRows('123\n456\n789'.split('\n'), Number)
grid.find((_, [ri, ci]) => ri === ci) // 1 (not super useful example)
Although there are certainly more methods we can write, let’s end the iterating section with filter, especially since it wasn’t the most straightforward to write (and there is more than one way to author it). The idea is to remove all non-passing values from every row, then remove empty rows, then flatten all rows into a single array.
class Grid<T> {
// …
filter(predicate: (item: T, coords: Coords) => boolean) {
return this.rows
.map((row, ri) => row.filter((value, ci) => predicate(value, [ri, ci])))
.filter(row => row.length > 0)
.flat()
}
}
// Examples
const grid = Grid.fromRows('123\n456\n789'.split('\n'), Number)
grid.filter(value => value % 3 === 0) // [3, 6, 9]
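Since there is more than one way to author it, here is an alternative sketch of the same filtering logic expressed with reduce, written as a standalone function over a plain bi-dimensional array (an illustration, not the class method):

```typescript
type Coords = [number, number]

// Collect every passing value in reading order, row by row
const filterGrid = <T>(
  data: T[][],
  predicate: (item: T, coords: Coords) => boolean
) =>
  data.reduce<T[]>(
    (acc, row, ri) =>
      row.reduce(
        (a, item, ci) => (predicate(item, [ri, ci]) ? a.concat(item) : a),
        acc
      ),
    []
  )

filterGrid([[1, 2, 3], [4, 5, 6], [7, 8, 9]], value => value % 3 === 0)
// [3, 6, 9]
```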
I won’t go too deep into the next piece of code, mainly because I wrote it a while ago and I’m not 100% sure anymore how it works; rotating matrices has never been my forte. Anyway, it provides a couple of methods to manipulate the data: rotate, flip, and variants to get all the possible rotations/flips of the grid.
class Grid<T> {
// …
clone() {
return Grid.from(structuredClone(this.data))
}
rotate() {
const next = new Grid<T>(0)
this.columns.forEach((_, ci) => {
next.rows.push(this.rows.map(row => row[ci]).reverse())
})
return next
}
flip() {
const flipped = this.clone()
flipped.rows.reverse()
return flipped
}
variants() {
const variants: Grid<T>[] = []
const rotate = (rotations: number = 0) => {
let grid = this.clone()
for (let i = 0; i < rotations; i++) grid = grid.rotate()
return grid
}
for (let i = 0; i <= 3; i++) {
const rotated = rotate(i)
const flipped = rotated.flip()
variants.push(rotated)
variants.push(flipped)
}
return variants
}
}
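To make the rotation easier to picture, here is the same transform as a standalone function on a plain bi-dimensional array: column ci of the input becomes row ci of the output, reversed, which amounts to a clockwise rotation.

```typescript
// Clockwise rotation of a bi-dimensional array (standalone sketch,
// mirroring what the rotate method does on the class's data)
const rotateClockwise = <T>(data: T[][]): T[][] =>
  (data[0] ?? []).map((_, ci) => data.map(row => row[ci]).reverse())

rotateClockwise([
  [1, 2, 3],
  [4, 5, 6],
  [7, 8, 9],
])
/**
 * [ [ 7, 4, 1 ]
 *   [ 8, 5, 2 ]
 *   [ 9, 6, 3 ] ]
 */
```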
It can be useful to log the grid for debugging purposes. We can write a little render function that serializes the grid for console output:
class Grid<T> {
// …
render(
separator: string = '',
mapper: (value: T) => string = value => String(value)
) {
return this.rows.map(row => row.map(mapper).join(separator)).join('\n')
}
}
// Examples
const grid = Grid.fromRows('123\n456\n789'.split('\n'), Number)
console.log(grid.render(' '))
/**
1 2 3
4 5 6
7 8 9
*/
Let’s take Day 11 2021 from Advent of Code as an example. Our input is the following multi-line string of numbers:
7222221271
6463754232
3373484684
4674461265
1187834788
1175316351
8211411846
4657828333
5286325337
5771324832
The exercise is, I quote:
There are 100 octopuses arranged neatly in a 10 by 10 grid. Each octopus slowly gains energy over time and flashes brightly for a moment when its energy is full. Although your lights are off, maybe you could navigate through the cave without disturbing the octopuses if you could predict when the flashes of light will happen.
Each octopus has an energy level. The energy level of each octopus is a value between 0 and 9. Here, the top-left octopus has an energy level of 7, the bottom-right one has an energy level of 2, and so on. You can model the energy levels and flashes of light in steps. During a single step, the following occurs:
- First, the energy level of each octopus increases by 1.
- Then, any octopus with an energy level greater than 9 flashes. This increases the energy level of all adjacent octopuses by 1, including octopuses that are diagonally adjacent. If this causes an octopus to have an energy level greater than 9, it also flashes. This process continues as long as new octopuses keep having their energy level increased beyond 9. (An octopus can only flash at most once per step.)
- Finally, any octopus that flashed during this step has its energy level set to 0, as it used all of its energy to flash. Adjacent flashes can cause an octopus to flash on a step even if it begins that step with very little energy.
Given the starting energy levels of the dumbo octopuses in your cavern, simulate 100 steps. How many total flashes are there after 100 steps?
To count the flashes and solve the puzzle, we are going to start by instantiating a grid from the given input. We will make good use of the mapper parameter to transform each cell into an object instead of just a numeric value. Then we’ll simulate 100 cycles and accumulate the number of flashes as we go.
type Octopus = { value: number; flashed: boolean }
const countFlashes = (input: string) => {
const grid = Grid.fromRows<Octopus>(input.split('\n'), value => ({
value: +value,
flashed: false,
}))
let flashes = 0
for (let i = 0; i < 100; i++) flashes += cycle(grid)
return flashes
}
The cycle function implements the puzzle rules, making good use of our iteration methods (forEach, and count which we haven’t implemented here):
const cycle = (grid: Grid<Octopus>) => {
// 1. Increment the energy value of each octopus
grid.forEach(octopus => octopus.value++)
// 2. Process the flashes (recursively)
processFlashes(grid)
// 2b. Count how many octopuses flashed
const flashes = grid.count(octopus => octopus.flashed)
// 3. Reset the octopuses that flashed
grid.forEach(octopus => {
octopus.flashed = false
if (octopus.value > 9) octopus.value = 0
})
return flashes
}
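The count method isn’t implemented in this article; a minimal sketch following the same pattern as reduce could look like this (shown as a standalone function over a bi-dimensional array so the snippet is self-contained; in the class it would simply delegate to this.reduce):

```typescript
type Coords = [number, number]

// Count how many cells satisfy the predicate
const count = <T>(
  data: T[][],
  predicate: (item: T, coords: Coords) => boolean
) =>
  data.reduce(
    (acc, row, ri) =>
      row.reduce((a, item, ci) => a + (predicate(item, [ri, ci]) ? 1 : 0), acc),
    0
  )

count([[1, 2], [3, 4]], value => value % 2 === 0) // 2
```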
The missing piece is our processFlashes function:
const processFlashes = (grid: Grid<Octopus>) => {
const toIncrement: Coords[] = []
grid.forEach((octopus, coords) => {
if (!octopus.flashed && octopus.value > 9) {
octopus.flashed = true
// Not implemented here: the `surrounding` helper function returns the 8
// sets of coordinates surrounding the given set of coordinates
toIncrement.push(...surrounding(coords))
}
})
toIncrement.forEach(coords => {
const octopus = grid.get(coords)
if (octopus) octopus.value++
})
if (toIncrement.length > 0) processFlashes(grid)
}
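The surrounding helper mentioned in the comment above isn’t shown either; a sketch could enumerate the 8 neighboring offsets like so:

```typescript
type Coords = [number, number]

// Return the 8 sets of coordinates surrounding the given one (including
// diagonals); some may be out of bounds, which `grid.get` resolves to
// `undefined`, hence the `if (octopus)` check above
const surrounding = ([ri, ci]: Coords): Coords[] =>
  [-1, 0, 1].flatMap(dr =>
    [-1, 0, 1]
      .filter(dc => dr !== 0 || dc !== 0)
      .map(dc => [ri + dr, ci + dc] as Coords)
  )

surrounding([1, 1]).length // 8
```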
That’s basically the gist of it, although there are many more things we can do (some of them already implemented in the GitHub version): every, everyColumn, everyRow, some, someColumn, someRow… I hope this helps!
For some reason, I couldn’t find anything noteworthy happening before May. 🙃
🇩🇪 May 18-21st. My partner and I went to Warnemünde, on the German north coast. It was colder than anticipated for that time of the year, but it was really nice to take long walks on the beach.
🎤 June 1st. I was invited to be a panelist at the Unlocking Inclusivity event by Xena, in Berlin. First time doing some public speaking in years, it was a lot of fun!
📦 June 22nd. I released version 8 of a11y-dialog with help from EJ Mason.
🇩🇪 June 23–25th. My partner and I went to Wernigerode, in the middle of Germany, with our godchild for a few days. First time seeing mountains (well, hills really) while living in Germany — what a treat! Lovely countryside, it was really nice.
🚶♀️ June 25th. That day marked 1 full year of walking over 5,000 steps (~3.8km) every day. Haven’t missed a day!
👩🏾💻 August. As part of the Empower Now mentoring program, I started mentoring Genefer Baxter, the founder of the Aula Future startup.
🇧🇪 September 6–10th. My partner and I went to Belgium for a few days. It’s my 2nd time in Belgium and first time in Ghent, which is a very pretty city. It was a lovely trip and I’d like to go back.
📍 October 13–15th. I attended the first Geoguessr World Cup in Stockholm, which was an absolute blast honestly. So much fun, what a precious and amazing event!
🌱 November 15th. This day marked 5 years for me without eating meat and 4 years without fish.
💻 December. I participated in Advent of Code again this year and managed to go relatively far, although there are certainly a lot of problems (especially in the later days) that I wasn’t able to solve. I’m also okay with that, and still had a lot of fun.
🏔 December 1–6th. I went back to the French Alps to celebrate my birthday with my family — double-special as my partner came along to enjoy family, cheese and practice their French.
🦍 December 22nd. My legal case with Gorillas following the layoffs from May 2022 ultimately came to an end, and with that I can finally close the Gorillas chapter.
Onto next year. :)
In our live preview editor, we want to display the scene boundaries in the video player track. A little bit like video chapters on YouTube, if you will.
There are certainly plenty of ways to build something like that, and I decided to implement it using CSS linear gradients. It was the simplest approach considering the rest of the code.
Conceptually, it’s not very difficult. We are going to use a CSS linear gradient with hard color stops to mark the start and end of each scene.
We have our own video player, which has a track bar styled with CSS. We instruct it to use a certain CSS custom property as background if defined, otherwise a solid color. It could look like this:
.Trackbar {
height: 10px;
width: 100%;
background: var(--preview-scene-markers, white);
}
Higher up our DOM/component tree and closer to the data layer, we have our scenes. It’s basically an array of objects that contain the start time and end time of each scene in the video.
We are going to loop over this array of scenes and, for each one, add some color stops to our gradient. Of course, for these stops to be visible, we need to create an artificial gap between 2 scenes: this is what the thickness option does in the code below. It creates an extra strip a few pixels wide, transparent so that the track color doesn’t render there. This is how the gaps are done.
const getTrackMarkers = (scenes, options = {}) => {
const { trackColor = '#fff', thickness = 4, precision = 2 } = options
const markers = []
const totalDuration = scenes.at(-1)?.end
const halfThickness = `${thickness / 2}px`
// Return nothing, and not `none`, as we want the default value from the CSS
// custom property to be applied.
if (!totalDuration || scenes.length === 1) {
return
}
scenes.forEach(scene => {
const percent = ((scene.end / totalDuration) * 100).toFixed(precision)
// Marker start
markers.push(`${trackColor} calc(${percent}% - ${halfThickness})`)
markers.push(`transparent calc(${percent}% - ${halfThickness})`)
// Marker end
markers.push(`transparent calc(${percent}% + ${halfThickness})`)
markers.push(`${trackColor} calc(${percent}% + ${halfThickness})`)
})
return `linear-gradient(to right, ${markers.join(', ')})`
}
Finally, we can put this in a custom property in some upper container; doesn’t matter too much where and this may be very framework-specific.
const container = document.querySelector('.SomeContainer')
const styles = container.style
const gradient = getTrackMarkers(scenes)
styles.setProperty('--preview-scene-markers', gradient)
The output is pretty verbose though, and gets more and more bloated as the number of scenes increases. For a test project with 8 scenes, we get:
linear-gradient(
to right,
#fff calc(6.86% - 2px), transparent calc(6.86% - 2px),
transparent calc(6.86% + 2px), #fff calc(6.86% + 2px),
#fff calc(14.00% - 2px), transparent calc(14.00% - 2px),
transparent calc(14.00% + 2px), #fff calc(14.00% + 2px),
#fff calc(21.13% - 2px), transparent calc(21.13% - 2px),
transparent calc(21.13% + 2px), #fff calc(21.13% + 2px),
#fff calc(38.31% - 2px), transparent calc(38.31% - 2px),
transparent calc(38.31% + 2px), #fff calc(38.31% + 2px),
#fff calc(57.63% - 2px), transparent calc(57.63% - 2px),
transparent calc(57.63% + 2px), #fff calc(57.63% + 2px),
#fff calc(62.79% - 2px), transparent calc(62.79% - 2px),
transparent calc(62.79% + 2px), #fff calc(62.79% + 2px),
#fff calc(72.98% - 2px), transparent calc(72.98% - 2px),
transparent calc(72.98% + 2px), #fff calc(72.98% + 2px),
#fff calc(81.47% - 2px), transparent calc(81.47% - 2px),
transparent calc(81.47% + 2px), #fff calc(81.47% + 2px),
#fff calc(93.14% - 2px), transparent calc(93.14% - 2px),
transparent calc(93.14% + 2px), #fff calc(93.14% + 2px),
#fff calc(100.00% - 2px), transparent calc(100.00% - 2px),
transparent calc(100.00% + 2px), #fff calc(100.00% + 2px)
);
There are some things we can do to squeeze some bytes out:
- Use shorter color values (avoiding transparent, rebeccapurple or other longer notations).
- Simplify the calc() expressions.

At the end of the day, there is only so much we can do though. Linear gradients use a verbose syntax, and we need a lot of color stops to make hard cuts between scenes.
expressions.At the end of the day, there is only so much we can do though. Linear gradients use a verbose syntax, and we need a lot of color stops to make hard cuts between scenes.
Roma Komarov actually found a few additional ways to compress the gradient value even further (see his CodePen for the code). I quote:
0
value to remove a bunch of duplication. The way it works: whenever the gradient has a color stop at a smaller distance value than the previous one, it uses the bigger of the two. The transition hint is basically a color stop without a color, so moving it to the “0” essentially makes the next color to start immediately from its full value.linear-gradient()
itself, making it so we won't repeat that part every time.Because it uses CSS, it’s not very accessible on its own. What I mean by this is that one would need to be able to see the track to notice the scene stops; it is not available to screen-readers or keyboard navigation.
However, the scenes can be navigated and browsed separately in our interface, so I believe it to be okay in that state. It’s a rather minor visual hint, which is not required to successfully use the editor.
There are some interesting things we could do with that feature:
Out of the box, Cypress does not support pressing the Tab key. As a way around it, Cypress recommends cypress-plugin-tab, but that module is no longer maintained, not to mention a little flaky.
I recently implemented a proper automated test using cypress-real-events instead. Unfortunately, it does not work in Firefox since it relies on the Chromium remote debugger protocol.
As you’ll see, my test code makes very few assumptions about the way the skip link is implemented. Instead, it makes sure that:
Without further ado:
// Load the page
cy.visit('/')
// Ensure the skip link is not visible to start with
cy.get('[data-cy="skip-link"]').as('skipLink').should('not.be.visible')
// Press Tab once to enter the page
// See: https://github.com/dmtrKovalenko/cypress-real-events/issues/355#issuecomment-1365813070
cy.window().focus().realPress('Tab')
// Ensure the skip link is now focused and visible
cy.get('@skipLink').should('have.focus').and('be.visible')
// Interact with the skip link
cy.get('@skipLink').realPress('Enter')
// Ensure the skip link is no longer focused or visible
cy.get('@skipLink').should('not.have.focus').and('not.be.visible')
// Press tab again and ensure the focus was moved to the main element
cy.realPress('Tab')
.focused()
.then($el => expect($el.closest('main')).not.to.be.null)
I hope this helps! ✨
Last year, I wrote about sharing code between a Sanity studio and the app it relates to by configuring Webpack aliases. Sanity v3 is no longer built on top of Webpack though; it uses Vite, which uses Rollup.
It took me a long time to figure out that the path aliasing configuration needs to be defined in the sanity.cli.js file and not the sanity.config.js file. Admittedly, it’s a pretty niche feature — especially as the Webpack version was not documented on purpose. Still, I feel like this information could be useful in the migrating-from-v2 documentation as a small recipe.
import path from 'path'
import { defineCliConfig } from 'sanity/cli'
export default defineCliConfig({
api: {},
vite: config => {
if (!config.resolve) config.resolve = {}
if (!config.resolve.alias) config.resolve.alias = {}
config.resolve.alias['@'] = path.resolve(__dirname, '..', 'src')
return config
},
})
Any file which ends up including JSX — either directly or indirectly — now needs to have the .jsx extension. I must say I’m not exactly sure why this is needed. It is probably possible to configure Vite to work around this, but I ended up renaming my files. Fortunately the error was very explicit and easy to address.
Style overrides are no longer possible besides just replacing some CSS custom properties with a custom theme. This is a bit of a shame because I used to overwrite some styles to make the studio more friendly/accessible.
Edit: I was wrong about the inability to apply custom styles. One can just import a CSS file in the sanity.config.js file and have the styles applied globally.
import './global.css'
export default defineConfig({
/* … */
})
Unlike other configuration functions (document.actions, document.newDocumentOptions, document.productionUrl), the studio.components.toolMenu configuration function does not receive the context, which means it is not possible to get the current user.
Ideally we could do:
const isAdmin = currentUser =>
currentUser?.roles.some(role => role.name === 'administrator') ?? false
// This does not work: `context` is undefined.
{
studio: {
components: {
toolMenu: (props, context) => {
const tools = isAdmin(context.currentUser)
? props.tools
: props.tools.filter(tool => tool.name === 'default')
return props.renderDefault({ ...props, tools })
}
}
}
}
This makes it inconvenient to customize the available tools based on the user’s role. Right now we have to hack things together by storing the current user on the window object in some other function, which is a tad awkward and prone to failure.
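For reference, that hack looks roughly like this. This is a sketch under assumptions: window.__currentUser is a made-up global name, and document.actions simply stands in for any configuration function that does receive the context.

```typescript
import { defineConfig } from 'sanity'

// Hypothetical sketch: stash the current user from a context-aware
// configuration function so it can be read from `toolMenu` later.
export default defineConfig({
  /* … */
  document: {
    actions: (prev, context) => {
      // `window.__currentUser` is a made-up global, for illustration only
      ;(window as any).__currentUser = context.currentUser
      return prev
    },
  },
})
```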
Similarly, the schema.types configuration does not accept a function but an array of types. A function would make it possible to get the context, particularly the current user, to condition the search engine based on the user’s role.
// This does not work: `schema.types` expects an array, not a function.
{
schema: {
types: context => {
return schemaTypes.map(entity => {
if (entity.type === 'document' && !EDITOR_TYPES.includes(entity.name)) {
return {
...entity,
__experimental_omnisearch_visibility: isAdmin(context.currentUser),
}
}
return entity
})
}
}
}
Sanity never had a built-in way to order documents within the studio. The general expectation is that documents should be programmatically sorted via the API based on their fields instead of manually in the interface.
Fortunately, there was the sanity-plugin-order-documents plugin that did just that. Unfortunately, it was a v2 plugin; however, Sanity shipped its own official plugin for v3.
The new plugin documentation is a little thin to start with, which is in stark contrast with the rest of the Sanity environment which is generally exceptionally well documented.
Perhaps more problematic: while the old plugin added another page entirely to reorder certain document lists (as illustrated here), the new one injects new menus within the main desk tool which makes for an awkward experience.
For instance, if you have an orderable “Category” entity type, you end up with a second menu called “Ordering Category” below it (or whatever you call it). And I’d be fine with it if that menu was there only to reorder entries, but that’s not the case: you can do full document edits within that menu as well, which means you now have 2 places to do the same thing. I’m not sure what limitation they were fighting that caused the interface to be skewed like this.
The new official media plugin which is supposed to replace the incredible media library community plugin forces the dark theme while the rest of the studio correctly adapts to the current theme. This was reported in #86 so it will hopefully be addressed.
As of version 3, the readOnly property no longer works if the field lives inside a fieldset — regardless of whether the fieldset is collapsed or not. It used to work fine in version 2. Peculiar bug I must say, because I wouldn’t have imagined a regression on a core form feature like this one.
This is now reported in #4124.
Edit: This was fixed very rapidly. Kudos to the Sanity team for reacting so quickly.
As you can see, it’s sometimes a little rough around the edges when doing things that are not super basic. That being said, version 3 brings a lot of super nice improvements. In no particular order:
- No more part: import paths, which is so much cleaner, nicer and easier for inter-operability with other tools. This monkey-patching of the Node resolution algorithm was madness, and I’m glad to see it gone.
- The document.productionUrl configuration intended to set up previewing systems can now be asynchronous, which was a pretty frustrating drawback in v2 requiring weird hacks.

I’ll keep updating this article as I learn more about v3.
The test suite is pretty slow though, since Advent of Code involves some brute-forcing exercises which take a long time to run. So the idea is to parallelize the tests so they don’t take as long.
I have 8 different folders: one folder per year since 2015. We can use a matrix to create multiple job runs in parallel.
strategy:
fail-fast: false
matrix:
year: [2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022]
Then in our step, we can access the name of the job (and the name of our folder) and pass it to our npm test command (which uses Jest, or Mocha, or Ava, or whatever under the hood).
- name: Run tests
run: npm test -- ${{ matrix.year }}
name: Tests
on: [push]
jobs:
tests:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
year: [2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022]
steps:
- name: Check out repository
uses: actions/checkout@v2
- name: Unlock input files
uses: sliteteam/github-action-git-crypt-unlock@1.2.0
env:
GIT_CRYPT_KEY: $
- name: Set up Node.js
uses: actions/setup-node@v2
with:
node-version: 18
cache: npm
- name: Install dependencies
run: npm ci
- name: Run tests
run: npm test -- ${{ matrix.year }}
🧩 January. I got in a bit of an Advent of Code frenzy and did all the puzzles from all previous years, all the way back to 2015. It was fun, I learnt a lot, and sharpened my coding skills for this year.
🎤 January 20th. I was kindly invited by Sarah Dayan and Bryan Robinson from Algolia to appear on the Developer Experience podcast to talk about the overlap between User Experience and Developer Experience, particularly in terms of accessibility.
📦 January 31st. As a result, I open-sourced a small library called Circularray. It was a fun way for me to learn about double-ended queues (like the deque
Python module) and implement them from scratch in JavaScript.
🏳️⚧️ March 15th. First time seeing my parents in person since coming out to them. It wasn’t too easy, but also didn’t go too bad, so all in all I think it’s okay.
🤧 May 22nd. I caught COVID, after 2 and a half years of avoiding it, despite 3 shots. A few bad days of fever and flu symptoms, followed by tiredness for another week. Could have been worse to be fair, but certainly not fun.
🦍 May 25th. I got laid off from Gorillas along with 350 employees in an effort to reduce costs amid the company’s inability to raise sufficient money to keep operations running.
☀️ June 13th–22nd. My brother and I visited our sisters in the south of France. A week full of warm weather, sun, beach and sea wind. Delightful (although I was still recovering from COVID)!
🚶♀️ June 25th. I went on what became a daily walk of 30–60 minutes. Haven’t missed a day since then!
⚡️ September 1st. I joined EVA Global as a Senior Engineering Manager.
📣 September 16th. I became one of the first Sanity Ambassadors, kickstarting their new community program after having spent months involved in the community.
🚪 September 27th. I resigned from EVA Global. It just wasn’t for me, and I didn’t feel like extending my employment any longer.
🎥 October 1st. I joined Cofenster as a VP of Engineering.
🌆 October 11–14th. I visited Hamburg for the first time to meet the Cofenster teams. I gotta say, the city center is lovely when the weather is nice.
👗 November. I did an entire month wearing only dresses or skirts, because I felt like it and because I wanted to make it more normal for me to wear that kind of outfit. It was nice, except for the cold!
🏳️🌈 November 29th. I spoke at a webinar on Diversity, Equity & Inclusion in the workplace.
🌱 November 15th. This day marked 4 years for me without eating meat and 3 years without fish.
💻 December. I participated in Advent of Code again this year and managed to go all the way through although I have skipped a day or two due to overwhelming complexity (for me).
👋 December 19th. I got the opportunity to meet Hidde de Vries during my short trip in the Netherlands! Lovely to finally meet in person after having crossed paths so many times online.
Here is a technical write-up of my journey through that puzzle.
The first thing I’m happy about is that I managed to solve part 1 very rapidly. And that’s only because a former Advent of Code event had a similar problem which I couldn’t solve back then. It feels nice realising that I learn stuff. 😅
Let’s go through it.
First, we parse the input file into an object where keys are the name of our variables, and values are the numbers or expressions. Something like this:
{
"root": "pppw + sjmn",
"dbpl": 5,
…
}
This first step can be done in plenty of different ways, but I’m used to Array.prototype.reduce
, so here goes:
const parseInput = input =>
input.reduce((acc, line) => {
const [name, value] = line.split(': ')
acc[name] = +value || value
return acc
}, {})
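As a quick sanity check, here is the parser run against a couple of made-up lines (the sample input is hypothetical; the function is the one above):

```javascript
const parseInput = input =>
  input.reduce((acc, line) => {
    const [name, value] = line.split(': ')
    // Numeric values are stored as numbers, expressions stay strings
    acc[name] = +value || value
    return acc
  }, {})

parseInput(['root: pppw + sjmn', 'dbpl: 5'])
// → { root: 'pppw + sjmn', dbpl: 5 }
```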
Once we have our map, the logic goes like this:

- Find the next key whose value is a number.
- Substitute that number into every expression referencing that key, evaluating any expression that becomes fully numeric along the way.
- Delete that key from the map, and repeat until we end up with a number for the root key.

The code goes like this:
const getRootNumber = input => {
const map = parseInput(input)
while (typeof map.root !== 'number') reduceNext(map)
return map.root
}
const getNextNumber = map =>
Object.entries(map).find(([, value]) => typeof value === 'number')
const reduceNext = map => {
const [nextKey, nextValue] = getNextNumber(map)
for (let key in map) {
const value = map[key]
if (typeof value === 'string' && value.includes(nextKey)) {
map[key] = value.replace(nextKey, nextValue)
// This is not the most elegant, but it does the job. If the
// expression contains only numbers (e.g. `2 + 3`), it will
// resolve it (e.g. `5`), otherwise (e.g. `2 + eklr`), it will
// fail and do nothing.
try {
map[key] = eval(map[key])
} catch {}
}
}
delete map[nextKey]
}
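To see the whole reduction in action, here is a sketch reusing the functions above, slightly adapted so getRootNumber takes a ready-made map instead of raw input (the toy map is made up):

```javascript
const getNextNumber = map =>
  Object.entries(map).find(([, value]) => typeof value === 'number')

const reduceNext = map => {
  const [nextKey, nextValue] = getNextNumber(map)
  for (let key in map) {
    const value = map[key]
    if (typeof value === 'string' && value.includes(nextKey)) {
      map[key] = value.replace(nextKey, nextValue)
      // Evaluate the expression if it is now fully numeric; otherwise
      // the eval throws and we leave the expression as is
      try {
        map[key] = eval(map[key])
      } catch {}
    }
  }
  delete map[nextKey]
}

const getRootNumber = map => {
  while (typeof map.root !== 'number') reduceNext(map)
  return map.root
}

// Toy map (hypothetical): root = a + b, where a = 2 and b = a * 3
getRootNumber({ root: 'a + b', a: 2, b: 'a * 3' })
// → 8
```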
Part 2 ups the ante: instead of finding the value of the root
key, we need to figure out which value for the humn
key would yield the right value to trickle down to the root
key.
Initially, I wasn’t too bothered by it. I thought I could reuse the code I wrote for part 1 by changing the value of the humn
key every time until we find the value that yields the correct result. My code looked a bit like this (follow along in the comments):
const getHumnNumberByBruteForce = input => {
let map = parseInput(input)
// We remove the `humn` key since it’s the actual key we are trying
// to figure out the value from.
delete map.humn
// While the exercise says to replace the `+` with an equality check
// (`==`) in the `root` value, we can instead replace it with a `-`
// sign so it returns `0` when we find the right value (e.g.
// `23622695042414 - 23622695042414`). This enables us to reuse the
// code from part 1 (which checks whether the `root` value is
// finally a number).
map.root = map.root.replace('+', '-')
// To speed things up, we first reduce the map as much as we can.
// Basically we deal with all keys which are mapped to numbers right
// away, so that we only focus on the dynamic expressions in the
// next loop.
while (getNextNumber(map)) reduceNext(map)
// We start our `humn` value at 0, run the code from part 1, and if
// it returns anything else but 0, we increment `humn` and repeat
// until we found the value that works.
let humn = 0
while (getRootNumber({ ...map, humn }) !== 0) humn++
return humn
}
Let’s start by saying that this code actually works. It yields the right result for the sample. The problem is that it’s unrealistic to hope to brute-force part 2 considering the answer has something like 14 digits. I went all the way up to 1,000,000 iterations in a few minutes until I decided to start looking at the numbers a little closer.
I put a console.log
right when we replace the last variable in the root
value. At this point, I’ve noticed that the root
value is an expression like this: root = lrnp === 23622695042414
. I’ve also noticed that even with a humn
value of 1,000,000+, I was very very far from matching that number.
So I kind of poked around manually by killing the process, updating the starting value of humn
to a super large number, and checking the log again to see how far I was. I’ve done that a few times, getting closer each time until my brute-force program managed to return the right result in a few seconds once the starting humn
value was close enough to the actual one.
Even though I managed to solve it with manually-assisted brute-force, I was curious how to figure it out the Right Way™. My gut feeling was that we may need to look at the input data instead, and find some sort of clue with the numbers.
So I printed the reduced map (the one with only ~80 expressions instead of 5000). I removed all punctuation symbols for clarity and ordered the operations from root
to humn
in a text file, like this:
root = lrnp === 23622695042414
lrnp = gdgf / 4
gdgf = 886 + zlwm
zlwm = 2 * pjcb
pjcb = 117205375899188 - mfvj
mfvj = 3 * hgfj
…
qdlz = 21 * dztn
dztn = 452 + humn
From there, I solved the equation manually by starting from the end value (23622695042414
), and applying each operation line by line (reversed though!). For instance, here is the logic for the first few lines:
- We need lrnp to be 23622695042414 for the root expression to be truthy.
- lrnp = gdgf / 4. Therefore, gdgf = lrnp * 4. So gdgf is 23622695042414 * 4, or 94490780169656.
- gdgf = 886 + zlwm. Therefore, zlwm = gdgf - 886. So zlwm is 94490780169656 - 886, or 94490780168770.
- zlwm = 2 * pjcb. Therefore, pjcb = zlwm / 2. So pjcb is 94490780168770 / 2, or 47245390084385.
- pjcb = 117205375899188 - mfvj. Therefore, mfvj = 117205375899188 - pjcb, so 117205375899188 - 47245390084385, or 69959985814803.

And so on until we reach a value for humn (3429411069028 in my case). We basically reverse-engineered the formula by hand.
Of course doing it by hand is pretty cumbersome, not to mention error-prone. I had to start again twice because I made silly math mistakes. So we should try to write a function to do that for us.
Our function starts very similarly to the naive brute-force attempt: we parse the input into a map, remove the humn
key (since we’re looking for it), then reduce the map as much as possible so we get rid of all numeric values and have only expressions left.
Then we read the root
value as an entry point. This gives us the next key we should resolve (lrnp
), and the initial numeric value we work from (23622695042414
).
Then, we keep iterating until we have found the humn
key, updating our value along the way by reversing the operation (if a = b * 2
, then b = a / 2
). Ultimately, we end up with our result!
const getHumnNumber = input => {
const map = parseInput(input)
delete map.humn
while (getNextNumber(map)) reduceNext(map)
let value = +map.root.match(/(\d+)/)[1]
let curr = map.root.match(/([a-z]+)/)[1]
// We walk down the operation chain until we reach the `humn` key.
// The idea is that we reverse the current operation to find the
// previous number. For instance if we have `a = b / 4`, we can find
// `b` (the next one), by multiplying the current value by 4
// (`b = a * 4`).
while (curr !== 'humn') {
const [a, operator, b] = map[curr].split(' ')
// Expressions are always made of 1 number and 1 variable, but the
// order is not guaranteed. So we need to check both to figure out
// which is which.
const next = isNaN(Number(a)) ? a : b
const number = !isNaN(Number(b)) ? Number(b) : Number(a)
if (operator === '*') value /= number
if (operator === '+') value -= number
// Small edge cases to deal with: if the expression is in the form
// of `a = x - b` or `a = x / b` where x is the number, the
// operation should actually *not* be reversed but kept as is.
// E.g. 10 = 20 / b is the same as b = 20 / 10, not b = 10 / 20
// E.g. 10 = 20 - b is the same as b = 20 - 10, not b = 10 - 20
if (operator === '/') {
if (!isNaN(Number(a))) value = number / value
else value *= number
}
if (operator === '-') {
if (!isNaN(Number(a))) value = number - value
else value += number
}
curr = next
}
return value
}
I like to read through the thread of answers on Reddit to learn how people solved problems.
Today, most people performed a binary search, which is a clever way to work around the performance problems of our naive brute-force solution. Basically the idea is to compose a mega math expression to begin with, and then to execute it with carefully selected values until we find the right result.
The expression can be generated relatively conveniently from our map:
const getExpression = (map, key = 'root') => {
const value = (map[key] || key).split(' ')
return value.length === 1
? value
: '(' + value.map(p => getExpression(map, p)).join(' ') + ')'
}
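To get a feel for it, here is the same function run against a tiny made-up map (toy values, not the actual puzzle input):

```javascript
const getExpression = (map, key = 'root') => {
  const value = (map[key] || key).split(' ')
  return value.length === 1
    ? value
    : '(' + value.map(p => getExpression(map, p)).join(' ') + ')'
}

// Toy map (hypothetical): root = a - b, a = humn * 2, b = 3
getExpression({ root: 'a - b', a: 'humn * 2', b: '3' })
// → '((humn * 2) - 3)'
```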
When executed against the reduced map (the one with only expressions), it spits out a monstrosity like this, with the humn
variable in the middle.
((886 + (2 * (117205375899188 - (3 * (((((((((338 + (((5 * (995 + (((((2 * (((((694 + ((7 + (((5 * (((858 + (((815 + (((((2 * (282 + ((528 + (((4 * (((((2 * ((((((2 * (867 + (((21 * (452 + humn)) - 886) / 2))) - 513) / 5) + 859) / 2) - 153)) - 727) + 677) / 2) + 343)) - 287) * 4)) / 4))) - 852) / 10) - 542) * 17)) / 2) + 922)) / 7) - 854)) - 175) / 5)) * 2)) * 9) - 972) / 3) + 51)) - 850) / 2) - 388) / 2))) - 171) * 2)) / 2) + 853) / 3) - 789) * 2) - 118) / 3) + 155))))) / 4) - 23622695042414
From there, the idea of a binary search (from what I understand of it) is that you start with a very very high value to make sure you hit too high. Then you divide your value by 2 and you try again. Depending on whether you hit too high or too low, you divide the relevant gap by 2 again and again until you find the right value.
For instance if you’re asked to guess for a number between 0 and 100 and all you get is “higher” or “lower”, you start by saying 50 (half the gap). If you hear “higher”, you then ask 75 (half the gap). If you hear “lower”, you ask 62 or 63 (half the gap), and so on.
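Here is a minimal sketch of that idea, assuming an evaluate function (a name I made up) that returns the signed difference between both sides of the root equation and decreases as the guess grows:

```javascript
// Binary search for the guess where `evaluate(guess)` returns 0,
// assuming `evaluate` is monotonically decreasing. Returns -1 if no
// exact match exists within the bounds.
const binarySearch = (evaluate, low = 0, high = Number.MAX_SAFE_INTEGER) => {
  while (low <= high) {
    const mid = Math.floor((low + high) / 2)
    const result = evaluate(mid)
    if (result === 0) return mid
    // Positive difference means the guess is still too low
    if (result > 0) low = mid + 1
    else high = mid - 1
  }
  return -1
}

// Hypothetical example: find x such that 100 - x === 0
binarySearch(x => 100 - x, 0, 1000)
// → 100
```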
It’s generally very very fast and efficient. Significantly more than my version (which takes about 1 second to run on my M1 laptop).
So I ended up solving this one 3 times: one time by combining brute-force with some manual poking around (my own clumsy version of binary searching), one time by hand entirely, and one time programmatically from what I learnt in the manual version. And then after that I learnt more about implementing a binary search by reading other people’s solutions.
It felt very good getting to the bottom of it and was pretty fun overall! ✨
I want to comply with the author’s wishes, but I also want to be able to just run my code for all the days I participated in without having to manually copy and paste my inputs from the site.
Fortunately, the aforementioned Reddit thread mentions git-crypt, a piece of software that can encrypt and decrypt files on push and pull respectively. This way, I have all my input files locally but when pushing them to my repository, they get encrypted so that they’re not actually readable or usable by anyone else.
Here is a quick and dirty article on doing just that.
First we install git-crypt
. I’m on macOS so I install it via Homebrew, but this may vary by operating system of course.
brew install git-crypt
Then within the repository folder, we initialize git-crypt
and export our secret key in a file.
cd aoc
git-crypt init
git-crypt export-key ./aoc-gitcrypt.key
Then we make sure never to commit that key file by adding it to the .gitignore
. I also backed it up on 1Password so that I can share it across my devices if needed.
# .gitignore
aoc-gitcrypt.key
Then we tell Git which files should be encrypted with a .gitattributes
file:
# .gitattributes
**/input.txt filter=git-crypt diff=git-crypt
That’s basically it for the setup. We can commit these changes so that in the future any input.txt
file gets encrypted on push and decrypted on pull (if we have our key).
Now that the setup works, we need to encrypt files that are already there because they’re still plain text right now. This can be done with the following command:
git-crypt status -f
So far so good. Unfortunately, all previous commits in our git history contain the raw text files, so we need to rewrite the history to clean that up. The clean way to do it is with git-filter-repo but who has time for that.
Let’s hack it with git filter-branch
although it’s not recommended. Please make sure to back up your repository before trying anything.
git filter-branch --index-filter 'git rm --cached --ignore-unmatch **/input.txt' HEAD
git push --force
This works well! All the input.txt
files got removed from all the commits in our git history. The problem is that they also got removed from the HEAD so we no longer have any input file.
This is why it’s nice to have backed up the folder before. We can now copy over all the text files from the backup onto the actual repository. I wrote a quick and dirty bash script for that:
# aoc-backup/migration.sh
find . -name "input.txt" -print0 | while IFS= read -r -d $'\0' file
do
echo "Restoring input file at $file"
cp "$file" "../aoc/$file"
done
Now we’ve brought back all our input files onto our repository. The last thing to do is to encrypt them once more before pushing them.
git-crypt unlock ./aoc-gitcrypt.key
git-crypt status -f
git commit -am "Restore encrypted input files"
git push
And that’s about it. We cleaned out our history from all input files, then we brought them back, encrypted them and pushed them. The only downside of this approach is that we can no longer check out an old commit and run our code, since the input files are missing from history, but oh well. It’s not like we were going to do that anyway.
I hope this helps!
Having a nice and clean terminal is important (to me at least) and I’ve done my fair share of copying and pasting configuration snippets until I was happy enough over the years. For some reason, I decided to dig into exactly how things work under the hood, and since this is all new to me, I thought I’d write about my findings.
This blog post is a little unusual because I’m writing it as I’m doing research, so it may not be very straightforward and potentially contain inaccurate information. It’s a “learn-by-teaching” kind of thing so please, kindly point out any mistake to me on Twitter (or edit this blog post on GitHub directly).
Setting up autocompletion for Git (for branch names for instance) used to be a little tricky, but with zsh on new macOS versions, it can be done by adding the following line to one’s ~/.zshrc
file (the configuration file for zsh).
autoload -Uz compinit && compinit
I didn’t know what autoload
is, so I dug a little. It appears autoload
is a Z shell utility to load code, specifically functions. This StackOverflow answer gives a bit more detail into what exactly it does, so I won’t go too deep into it here.
And with autoload
, we load compinit
. It looks like compinit
is the completion system from Z shell. Allow me to quote the docs:
To initialize the system, the function
compinit
should be […] autoloaded (autoload -U compinit
is recommended), and then run […] as compinit. This will […] re-define all widgets that do completion […].
In other words: we instruct Z shell to use its loading module to load its completion system so that we can benefit from autocompletion, particularly for Git purposes. Neat.
I really enjoy seeing the name of the branch I am on as part of my terminal prompt. It brings clarity and saves me from mistakes. This is made possible with the vcs_info
module. Just like we did for the completion module, we need to load this module by adding this line to our ~/.zshrc
file:
autoload -Uz vcs_info
This, however, is not changing our prompt. It’s just letting us access the VCS (Version Control Software) information (typically Git, but perhaps SVN or Mercurial). Now we need to do something with it.
I am clueless, but fortunately zsh comes with nice prose about this very feature, and as they explain, there are plenty of ways to achieve this. They say the easiest way to update one’s prompt with the VCS info is to—and I quote:
[…] drop a
vcs_info
call to yourprecmd
(or into aprecmd_functions[]
entry) and include a single-quoted${vcs_info_msg_0_}
in yourPS1
definition.
precmd() { vcs_info }
setopt PROMPT_SUBST
PS1='%3~ ${vcs_info_msg_0_} '
Okay. 🙃 Let’s try to understand what that means.
First, precmd
appears to be nothing more than a function that gets executed before every command we run in the terminal. We can verify this by adding an echo
statement to it and see it printed out every time we type in any command. Cool.
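For instance, something like this temporary line in the ~/.zshrc (a throwaway debugging sketch, not something to keep):

```shell
# Temporary check in ~/.zshrc: this prints a marker before every
# prompt, confirming precmd runs ahead of each command
precmd() { echo "precmd ran at $(date +%T)" }
```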
So what do we do in that pre-command hook? We call vcs_info
, which I can only assume grants us access to the VCS information. I think it exposes a variable called vcs_info_msg_0_
(amongst others) which contains the branch name. We can confirm that by commenting out that line (or emptying the function body) and restarting the terminal: the prompt no longer contains the branch name.
Then, while the documentation doesn’t explicitly tell us to run setopt PROMPT_SUBST
, it actually includes that line in the code snippet, so let’s have a glance at it. Looking at the documentation, it says:
If the
PROMPT_SUBST
option is set, the prompt string is first subjected to parameter expansion, command substitution and arithmetic expansion.
What that means is that without that option enabled, ${vcs_info_msg_0_}
gets printed literally, instead of replaced by the actual name of the branch. So we need to turn it on in order for it to work.
Finally, the actual prompt. PS1
(or PROMPT
, both refer to the same variable) is the variable defining what our terminal prompt looks like. In the example above, %3~
is the path to the current folder (to a maximum depth of 3 folders), and ${vcs_info_msg_0_}
is our VCS branch name.
Let’s pimp that up though. Here is mine:
PROMPT='%(?.%F{green}●.%F{red}●%f) %F{211}%1~%f ${vcs_info_msg_0_} '
It’s a bit of a beast though, so let’s break that down into digestible chunks:
%(?.%F{green}●.%F{red}●%f) is a ternary expression:

- ? means the exit status of the previous command. It returns true if the previous command exited successfully.
- . acts as a separator in the ternary expression. Everything between the two . is evaluated when the condition is truthy; everything after the second . otherwise.
- %F{…} updates the text color (if supported by the terminal); here to green (and red later on in the expression).
- ● is a literal character we want to print. It’s just a little bullet we use as an indicator.
- %f restores the text color to the default one.

%F{211}%1~%f prints the current directory in pink.

- %F{211} updates the text color to a lovely pink.
- %1~ is the name of the current directory (technically the current path to a single directory deep).
- %f resets the color to the default one.

${vcs_info_msg_0_} prints the VCS name and the name of the current branch; something like (git)-[main]-.
To make that last part a little better, we can run the following command (before defining our prompt):
zstyle ':vcs_info:git:*' formats '%F{153}%b%f'
zstyle
is a Z shell module to do styling. The way I understand the first argument is that it essentially acts as scoping. Here, it says that we want to apply styling/formatting for anything within the git
scope of the vcs_info
module.
In the formats
argument, %F{153}
is a light blue color code, %b
stands for the branch name, and %f
resets the text color to the default one, as always.
Changing the color of the bullet (●
) from red to green might not be sufficient if you’re color-blind. In that case, you could use different characters, like ✓
and 𐄂
.
Z shell also exposes an RPROMPT (or RPS1) variable to customize what appears on the right side of the line, if anything. I personally like to display the time of the day here. This way I know when I executed a command.
RPROMPT='%F{245}%*%f'
By now, you should be able to get the gist of such an expression:

- %F{245} updates the text color to a medium grey.
- %* is the current time.
- %f resets the color to the default one.

Phew! There we have it folks. A customized zsh prompt that actually makes sense. Well, for the most part that is. I hope this post was instructive! If you’re looking to make the move from bash to zsh, Armin Briegel has a fantastic series on moving to zsh (and even a book).
Here is the full code snippet.
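One possible arrangement, assembled from the pieces covered above (the ordering and comments are mine):

```shell
# ~/.zshrc
# Completion system (Git branch names, etc.)
autoload -Uz compinit && compinit

# VCS info for the prompt
autoload -Uz vcs_info
precmd() { vcs_info }
zstyle ':vcs_info:git:*' formats '%F{153}%b%f'

# Prompt: status bullet, pink directory, light blue branch name
setopt PROMPT_SUBST
PROMPT='%(?.%F{green}●.%F{red}●%f) %F{211}%1~%f ${vcs_info_msg_0_} '

# Right-side prompt: current time in grey
RPROMPT='%F{245}%*%f'
```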
This post is about the <template> element and how it can come in handy.
So to put it simply, the <template>
HTML element is intended to store HTML that is not yet used. The element itself and all its content are invisible, so it can be basically anywhere in the document without much risk. Although you’d typically have your templates at the root level.
Let’s start with the fact that <template> does not enable you to do anything that’s not possible otherwise. In that way, it’s more of a convenience tool really. If you have significant HTML structures that need to be injected at runtime, it might be very cumbersome to do so manually with document.createElement and element.setAttribute.
In Manuel’s case, he uses a template to hold a button that needs to be injected when JavaScript is finally available, as it wouldn’t work before that. Creating that button manually in JS, with the SVG and all, would be quite cumbersome. It would also violate proper separation of concerns by moving HTML into the JS logic.
<template id="burger-template">
<button type="button" aria-expanded="false" aria-label="Menu" aria-controls="mainnav">
<svg width="24" height="24" aria-hidden="true">
<path d="M3 18h18v-2H3v2zm0-5h18v-2H3v2zm0-7v2h18V6H3z" />
</svg>
</button>
</template>
Once you have a <template>
element in your HTML, you can access it in JavaScript and clone it to render it wherever you want.
const template = document.querySelector('#id-of-template')
const content = template.content.cloneNode(true)
container.append(content)
It’s also not limited to a single use. You can create as many clones as you want. The MDN page has a good example of storing a table row in a template so you can easily clone and add a new row on demand.
For instance, Sass Guidelines use templates to inject links to view or edit each chapter on GitHub directly. In an ideal world these links would be there all the time, but because Sass Guidelines is built from plain ol’ Markdown files, these links are generated in JS. This is the pull-request that implemented templates.
The browser support is surprisingly good. Almost 98% of the current landscape supports it, so feel free to go nuts. And if you have to support older agents, you can test for support like this:
if ('content' in document.createElement('template')) {
// `<template>` is supported.
}
Following this article, some people asked what would be the difference with using a hidden DOM element, such as a <div>
to hold our template content. After all, it feels similar?
<!-- Don’t do that, it’s just not as good or safe. -->
<div id="template" style="display: none;">
<!-- Template content here -->
</div>
There are a few reasons why using a <template> is better—some better than others (thanks to Spankalee for outlining a few I didn’t think of)—so pick what is most convincing to you:

- The content of a <template> is inert: images and scripts do not load, styles do not apply, elements are not queried, etc.
- A <template> can safely contain a <td>, <li> or <dd> without a validator complaining. Similarly, a <template> can be rendered virtually anywhere, which may not be the case for a <div>.
- Hiding a <div> with CSS is not really bulletproof. On the other hand, chances are that the <template> element will always be hidden, even without CSS. The hidden HTML attribute is probably a better choice if you go that route anyway.
- A <template> is just more semantic and obvious in intent than a hidden container if you ask me, which may be particularly relevant for third-party tools, extensions et al.

Long story short, templates are good to avoid creating complex DOM structures by hand. For a single node, using the built-in manipulation methods is fine, but for anything more complex, you probably want to store the HTML blob as is and just clone and fill it when needed.
As always, I hope it helps!
The app has been rebuilt since and I cannot find old screenshots, but at the time registering for N26 consisted of filling a form that displayed one field at a time. So you’d have a dozen steps, and each view consisted of a title, a description, one or more related fields, and a confirm button at the bottom to move to the next step.
On the address confirmation step however, we had 2 side-by-side buttons at the bottom: one primary button to confirm your address and move on, and one secondary button to edit your address and go back to the previous step.
What we wanted was for the primary button to be on the side of the dominant hand. So on the right for a right-handed user, and on the left for a left-handed user.
The way we thought we could detect the user’s dominant hand was by recording and scoring taps based on whether they occur on the left- or right-side of full-width buttons. We assumed (perhaps incorrectly) a right-handed user would tap buttons towards the right side of the screen, while a left-handed user would tap buttons between the left edge and the center.
We wrote a script that would intercept taps happening on elements considered full-width and check on which side they occurred, and give them a score between -1 (left edge) to +1 (right edge). As we recorded more taps, we would make that score more and more accurate.
I dug and found the code we wrote. However, it was a higher-order component and used React classes, so I refreshed it to use hooks. Here is what it looks like (sorry for React, I don’t have the energy to move it to plain ol’ JavaScript):
// Returns a number between -1 and +1 to convey the guessed dominant hand, with
// -1 being left side and +1 being right side.
const useDominantHandScore = ({
// Arbitrary maximum screen width to compute score for; anything beyond that
// is considered not a mobile device and thus discarded
maximumScreenWidth = 500,
// Threshold above which an element is considered full-width (80% by default)
// and can be a candidate for tap recording
fullWidthThreshold = 0.8,
} = {}) => {
const viewportWidth = useViewportWidth()
const [tapScore, setTapScore] = React.useState(0)
const [tapCount, setTapCount] = React.useState(0)
const handleTap = React.useCallback(
event => {
const consideredFullWidth = viewportWidth * fullWidthThreshold
const targetWidth = event.target.offsetWidth || 0
// If not on mobile (not a great check but heh) or not a click event or
// not a tap on a full-width element, do nothing
if (viewportWidth > maximumScreenWidth) return false
if (event.clientX === 0 && event.clientY === 0) return false
if (targetWidth < consideredFullWidth) return false
setTapCount(count => count + 1)
setTapScore(score => score + getTapPosition(event))
},
[viewportWidth, maximumScreenWidth, fullWidthThreshold]
)
React.useEffect(() => {
document.addEventListener('click', handleTap)
return () => {
document.removeEventListener('click', handleTap)
}
}, [handleTap])
return tapScore / tapCount || 0
}
function getTapPosition(event) {
const percentage = Math.round(
((event.clientX - event.target.offsetLeft) / event.target.offsetWidth) * 100
)
// Convert the percentage (0–100) to a number on the -1/+1 scale
return (percentage - 50) / 50
}
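The scoring math can also be isolated into a pure function, which makes it easier to reason about (scoreTap is a hypothetical standalone helper, not part of the original hook):

```javascript
// Map a tap position (in px) within an element to a score between
// -1 (left edge) and +1 (right edge), same math as getTapPosition above
const scoreTap = (clientX, offsetLeft, offsetWidth) => {
  const percentage = Math.round(((clientX - offsetLeft) / offsetWidth) * 100)
  return (percentage - 50) / 50
}

scoreTap(0, 0, 100)   // → -1 (left edge)
scoreTap(50, 0, 100)  // → 0 (dead center)
scoreTap(100, 0, 100) // → +1 (right edge)
```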
You can play with the demo on CodeSandbox. Be sure to resize the window so the browser panel is at most 500 pixels wide, since it’s the threshold we use for detection.
Something we have done back then (although I couldn’t figure out how or where) is bringing in a concept of reliability. In that regard, there are two things to consider:
In the Twitter thread, I expanded on how I think it could be interesting to have this as an operating system feature, akin to the reduced motion mode or the light/dark switch. For instance, the reMarkable tablet asks for the user’s dominant hand during the setup process.
Once it’s an OS setting, it can be conveyed by the browser via a media query. Let’s say, “prefers-dominant-hand”. It would have 3 values: left
, right
and no-preference
. From there, you could adjust your designs based on the value of this media query:
.FloatingButton {
position: fixed;
top: 0;
right: 0;
}
/* This is not a real thing; it’s only for demonstration purposes */
@media (prefers-dominant-hand: left) {
.FloatingButton {
left: 0;
}
}
In the original thread, I reflected on the fact that left and right are notions CSS is trying to navigate away from, preferring directional properties (e.g. margin-inline-start
instead of margin-left
or flex-end
instead of flex-right
).
Holger thus suggests imagining start
and end
as values to the potential media query instead, which could mean left/right OR right/left depending on the context (LTR or RTL). So LTR + right thumb = start, RTL + right thumb = end, and so on.
On another note, Tim Severien chimed in suggesting that we might not want to adapt our interfaces based on arbitrary user traits and instead provide the option for our users to adjust settings to suit their needs.
A bit nitpicky here, as it’s a mere naming thing, but I generally find adapting based on user traits troublesome as they’re full of biases. I’m a left-handed writer but ambidextrous smartphone user.
Similarly we don’t adapt UI based on vision, but remove the assumptions and allow users to choose light/dark preference, I’d adapt navigation on navigation side preference. I guess that also removes the complexity of RTL.
Regardless, it’s definitely fun idea. I’m curious to learn how designers would deal with this. For example, some designers avoid floating buttons to overlap other items. I guess the setting would add a new challenge and opportunity to get creative, which is fun!
— Tim Severien on Twitter
Kilian Valkhof also mentioned how having the ability to register arbitrary media queries would be great. The browser would then provide a built-in interface to tweak these settings, which can then be accessed back with media queries.
Imagine an API that lets sites register arbitrary media features and the browser then exposes a UI for them automatically. 🤩
It would even make known media features like prefers-color-scheme better (with a browser UI toggle) and make customization more discoverable and easier.
— Kilian Valkhof on Twitter
For instance, let’s imagine authoring this code snippet:
// This is not a real thing; it’s only for demonstration purposes
CSS.registerMedia({
name: 'prefers-dominant-hand',
syntax: 'start | end | no-preference',
initialValue: 'no-preference',
})
The browser would then provide a native interface for the user to define their dominant hand (if they wish to do so). If/when they’ve done that, we can read the updated value with the media query suggested above. Now wouldn’t that be neat? Maybe something for the Web We Want. :)
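To make the idea a little more concrete, here is a sketch of how a page might consume such a feature if it existed. Neither prefers-dominant-hand nor CSS.registerMedia are real today, and handFromMatches is just an illustrative helper:

```javascript
// Hypothetical sketch: `prefers-dominant-hand` does not exist; this only
// illustrates how a page could read it via `window.matchMedia`.
function handFromMatches({ left, right }) {
  if (left) return 'left'
  if (right) return 'right'
  return 'no-preference'
}

if (typeof window !== 'undefined' && window.matchMedia) {
  const hand = handFromMatches({
    left: window.matchMedia('(prefers-dominant-hand: left)').matches,
    right: window.matchMedia('(prefers-dominant-hand: right)').matches,
  })
  // Expose the value so styles or scripts can react to it
  document.documentElement.dataset.dominantHand = hand
}
```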
Kilian expanded on this idea on his own blog since I wrote this article. Be sure to have a read!
It’s unclear whether dominant-hand design is something worth exploring. It’s been in the back of my head since we played with this late 2016, and I haven’t seen anything about this concept since (or before for that matter). I still wonder whether this is a great idea or a terrible one.
If you’d like to consider it, hit me up. I’d love to see a clean implementation (maybe with a small open-source library?) and the results it yields. I’m sure it would be interesting. ✨
Content warnings are notices preceding potentially sensitive content. This is so users can prepare themselves to engage or, if necessary, disengage for their own wellbeing. Trigger warnings are specific content warnings that attempt to warn users of content that may cause intense physiological and psychological symptoms for people with post-traumatic stress or anxiety disorder (PTSD).
In this blog post, we’ll see how to author a component for hiding content behind a warning. I’m using the word “component” broadly here, because it’s really just HTML and CSS. Feel free to wrap it in a React, Vue, Web Component (done that last one for you) or whatever floats your boat.
Functionally speaking, it’s pretty easy: we want to show there is hidden content, and we want to disclose why it’s hidden. From there, the reader can decide whether they want to expand the content or not.
This is the perfect use case for a good ol’ <details> / <summary> combo. Here is how it looks:
<details class="ContentWarning">
<summary><strong>⚠️ Content warning:</strong> Food</summary>
<img
src="https://images.unsplash.com/photo-1561758033-d89a9ad46330?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MjB8fGJ1cmdlcnxlbnwwfHwwfHw%3D&auto=format&fit=crop&w=900&q=60"
alt="Juicy burger and fries on a wooden board"
/>
</details>
At that stage, I’d argue it’s basically good enough as it checks all the boxes. It hides the content, warns about the risks, and provides a way to access said content. It also works without JavaScript which is fantastic. What more to ask?
Because we rely on <details> and <summary>, styles are purely cosmetic and do not really serve much functionality. Therefore, feel free to customize them the way you see fit.
.ContentWarning {
border: 1px solid rgb(0 0 0 / 0.3);
border-radius: 0.2em;
}
/**
* 1. Remove the default arrow that comes with the `<summary>` element
* 2. Make the toggle feel clickable with a hand cursor
* 3. Increase the hitbox of the toggle for ease of action
* 4. Give the toggle a striped background to make it stand out
*/
.ContentWarning > summary {
list-style: none; /* 1 */
cursor: pointer; /* 2 */
padding: 1em; /* 3 */
--stripe-color: rgb(0 0 0 / 0.1); /* 4 */
background-image: repeating-linear-gradient(
45deg,
transparent,
transparent 0.5em,
var(--stripe-color) 0.5em,
var(--stripe-color) 1em
); /* 4 */
}
/**
* 1. Tweak the stripes color on hover/focus to indicate that interacting with
* the toggle will disclose the sensitive content
*/
.ContentWarning > summary:hover,
.ContentWarning > summary:focus {
--stripe-color: rgb(150 0 0 / 0.1); /* 1 */
}
I deliberately omitted some styles for the sake of simplicity, so you can see the result in the following CodePen:
See the Pen Untitled by Kitty Giraudel (@KittyGiraudel) on CodePen.
As you might have noticed, this is a very under-engineered solution—which I personally like. We could go further though. Here are a few options:
We could provide a bit more context about why the content is not displayed by default. For instance, Twitter states “The Tweet author flagged this Tweet as showing sensitive content.”
Instead of using the disclosure pattern, we could blur the content. I am a little on the fence with this approach because a) it would require JavaScript and b) I feel like blur is either cosmetic or needs to be subtle enough that you can sort of guess the content behind; which would defeat the purpose of a content warning. On the flip side, it would avoid the content moving up and down due to the change of height.
We could provide an option to no longer mark content from this category as sensitive. Once stored in local storage or something, this would skip the whole widget if the theme is not deemed sensitive by the reader.
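That third option could be sketched like this. The helper names and the localStorage format are illustrative only, not part of the original component:

```javascript
// Hypothetical helpers for remembering dismissed warning categories.
// The storage key and shape are assumptions for this sketch.
const STORAGE_KEY = 'dismissed-content-warnings'

function getDismissedCategories(storage) {
  try {
    return JSON.parse(storage.getItem(STORAGE_KEY)) || []
  } catch {
    return []
  }
}

function shouldSkipWarning(category, storage) {
  return getDismissedCategories(storage).includes(category)
}

function dismissCategory(category, storage) {
  const dismissed = getDismissedCategories(storage)
  if (!dismissed.includes(category)) {
    storage.setItem(STORAGE_KEY, JSON.stringify([...dismissed, category]))
  }
}
```

If shouldSkipWarning('Food', localStorage) returns true, the <details> element could be rendered with the open attribute, or the warning wrapper skipped entirely.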
First, let’s create a .github/workflows/backup.yml file in our repository. This will contain the instructions for GitHub to perform our backup.
Let’s say we want to run our backup every 5 days. This is doable with a cron job, which is essentially a Unix term for “scheduler”. The cron syntax is notoriously obscure, so I recommend using a tool such as crontab guru to assist. We eventually land on this expression: 0 10 */5 * *, which the aforementioned tool translates into:
At 10:00 on every 5th day-of-month.
name: Sanity backup
on:
schedule:
- cron: '0 10 */5 * *'
Now that we named our workflow and defined when it runs, we need to write some steps. Here is what we want to do:
Cloning a repository is done with the official checkout GitHub action. The default options should be enough for us.
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
Performing the backup is certainly the most complicated step of our workflow. We need to use the Sanity command line tool to perform an export of our dataset with its name (here production). You’ll note we specify the directory for our studio—here studio—as we need to execute that command within the context of a Sanity project.
We also need to pass two environment variables:
SANITY_STUDIO_API_PROJECT_ID, which can be found in the settings and is not private or sensitive, so we can safely inline it in our code.
SANITY_AUTH_TOKEN, a token with sufficient permissions for the backup. This secret needs to be defined as a GitHub secret (although the name can be changed).
- run: npx @sanity/cli dataset export production backup.tar.gz
working-directory: studio
env:
SANITY_STUDIO_API_PROJECT_ID: theProjectId
SANITY_AUTH_TOKEN: ${{ secrets.SANITY_BACKUP_TOKEN }}
Finally, we want to upload the resulting tarball to GitHub. This can be done with the official upload-artifact action. We need to give it a name (for the interface), the path to our file, and for how long we want to store it.
- uses: actions/upload-artifact@v2
with:
name: production
path: studio/backup.tar.gz
retention-days: 5
That’s it really. From there, you can download your backup from the action interface itself, directly on GitHub.
Short and sweet as advertised. Now you can rest assured you won’t lose your data!
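Assembled from the snippets above, the complete .github/workflows/backup.yml looks like this:

```yaml
name: Sanity backup

on:
  schedule:
    - cron: '0 10 */5 * *'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - run: npx @sanity/cli dataset export production backup.tar.gz
        working-directory: studio
        env:
          SANITY_STUDIO_API_PROJECT_ID: theProjectId
          SANITY_AUTH_TOKEN: ${{ secrets.SANITY_BACKUP_TOKEN }}

      - uses: actions/upload-artifact@v2
        with:
          name: production
          path: studio/backup.tar.gz
          retention-days: 5
```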
It is a long-standing convention to use the asterisk character (*) to denote required fields on the web. There is little historical content about this out there, but Jared Spool pointed out that this design decision appears to predate the web:
[I]t predates the web. I've seen instances of an asterisk to indicate a required field on mainframe data entry screens from the 70s. So, it's a pretty old convention. As to where it first showed up or who was the mastermind behind it, I'm without a clue.
— Jared Spool on UX Stack Exchange
It may also be derived from the print industry commonly using it to mark footnotes. Regardless of where it came from, it’s safe to say the asterisk character is now a well established convention, so much so that we get to ask ourselves: is the symbol enough in itself?
I was recently doing an accessibility review and someone asked me what was the ideal markup for denoting required fields in web forms. I initially thought it to be a no-brainer, but after digging a bit, I realised there is room for interpretation.
The first version is by far the simplest. All we do is append an asterisk to the label itself.
<label for="name">Name *</label> <input type="text" id="name" required="" />
People will see the star and understand that the field is required. In theory, people using screen-readers should hear the asterisk character. However, as Denis Boudreau explains, the asterisk symbol is not part of the characters that are naturally conveyed at the default verbosity level of most screen readers. So it might be skipped altogether.
That being said, the field has the required attribute, so it will be announced as such by screen-readers. Because of that, we could consider removing the asterisk from the accessibility tree, like so:
<!-- Not ideal, see below -->
<label for="name">Name <span aria-hidden="true">*</span></label>
<input type="text" id="name" required="" />
I discussed this approach with Hidde de Vries and he suggested that it may be useful for screen-reader users to know that there’s a star in the label. For instance, if before the form there’s an intro that says “Fields marked with a star (*) are required” or something along those lines—a recommended technique (G184) to satisfy Success Criterion 3.3.2: Labels or Instructions.
Scott O’Hara points out that tweaking the wording to avoid focusing on the star character (e.g. “Fill in all required fields (*)”) might be a good idea. Then the required attribute (or aria-required) indicates the required nature of the field without having to try and make someone look for the * which may not even be announced.
A minor improvement we could consider is adding a title attribute to the star to give it more context. The title attribute is not picked up consistently by assistive technologies, so this is more of a usability tip than an accessibility one—although it appears to be a recommended WCAG technique (H90).
<label for="name">Name <span title="Required field">*</span></label>
<input type="text" id="name" required="" />
A different approach I have seen used (and have implemented myself) is to vocally indicate that the field is required by providing visually hidden assistive text as part of the label.
<label for="name">
Name
<span aria-hidden="true" title="Required field">*</span>
<span class="sr-only">Required field</span>
</label>
<input type="text" id="name" required="" />
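The sr-only class used in the snippet above is assumed to be the usual visually-hidden utility, something along these lines:

```css
/* Visually hidden, but still announced by screen readers */
.sr-only {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}
```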
The potential issue with this approach is that the requirement might be voiced twice. First as part of the label, then via the required HTML attribute. Better safe than sorry I guess, but overly verbose output could be tedious for screen-reader users.
Well, there is no definitive answer (as it’s often the case).
The first approach is certainly simpler, but may lack clarity with less savvy audiences. If used, the form should contain information stating that fields marked with a star are required (ideally at the top, before any required field).
The second approach is more explicit, but maybe too explicit for screen-reader users who may experience double output. I’ll be transparent and admit that I haven’t extensively tried it, so results may vary based on the assistive technology.
This article outlines other variations using an icon instead of the asterisk character, or relying on the aria-describedby attribute. Søren Birkemeyer recommends flipping the problem on its head and marking optional fields instead, when everything is required by default. This method apparently yields positive results for their clients. As you can see, there are plenty of ways to tackle that problem.
Ultimately, you can pass WCAG SC 3.3.2 and provide a good user experience to everyone with any technique, provided it’s implemented correctly.
In this article, we’ll discuss how to make it possible to mark text snippets as expressed in a different language than the rest of the content. This is particularly important for people relying on screen-readers as the proper demarcation of language may trigger a vocal dictionary switch.
Success criterion 3.1.2 of the Web Content Accessibility Guidelines, called “Language of Parts”, states:
The human language of each passage or phrase in the content can be programmatically determined except for proper names, technical terms, words of indeterminate language, and words or phrases that have become part of the vernacular of the immediately surrounding text. (Level AA)
In other words, for any bit of text content on a page, it should be possible to determine its language. This is typically done at the document level via the lang attribute on the <html> element. For instance, this website is in English so the <html> element has a lang="en" attribute.
But it can also be done on a very local part of the document, to mark a single sentence or even a single word as being in another language.
Taking the first example from the WCAG 3.1.2 page: “He maintained that the DDR (German Democratic Republic) was just a ‘Treppenwitz der Weltgeschichte’.” That last part should be marked as being German. Like this:
<p>
He maintained that the DDR (German Democratic Republic) was just a
‘<span lang="de">Treppenwitz der Weltgeschichte</span>’.
</p>
This way, when a screen reader encounters the German phrase, it changes pronunciation rules from English to German to pronounce the word correctly, instead of butchering them using the English dictionary.
The documentation for this success criterion outlines perfectly why doing this matters, so much so that I’ll just borrow directly from there:
Additionally, Hidde de Vries rightfully pointed out that section B.2.1.1 of the Authoring Tools Accessibility Guidelines expects tools to make it possible to comply with the WCAG.
The authoring tool does not place restrictions on the web content that authors can specify or those restrictions do not prevent WCAG 2.0 success criteria from being met.
You can find a more digestible and human-friendly version of the ATAG on Hidde’s website.
There is unfortunately no out-of-the-box way to annotate a bit of text as being in a certain language with Sanity’s Portable Text editor. However, it is extensible with custom annotations, so that’s what we’re looking at here.
Let’s start with a very basic schema definition for some Portable Text.
export default {
title: 'Content',
name: 'content',
type: 'array',
of: [
{
type: 'block',
marks: { decorators: [{ title: 'Strong', value: 'strong' }] },
},
],
}
We want to add a custom annotation to mark text snippets as being expressed in another language than the rest of the document.
export default {
title: 'Content',
name: 'content',
type: 'array',
of: [
{
type: 'block',
marks: { decorators: [{ title: 'Strong', value: 'strong' }] },
annotations: [languageSwitch],
},
],
}
Now onto our annotation object. It needs a name (lang for simplicity, but feel free to call it whatever you want), and a text field to specify which language code it is.
As per the HTML specification, the lang attribute expects a “language tag” following the RFC 5646 (also known as BCP 47 apparently—who knew). There are some good validators for this format out there, but I decided to go with something simple and flexible: some letters, optionally followed by a hyphen and some more letters. For instance, de or en-GB. To better understand language tags, I recommend this dedicated section on MDN.
const languageSwitch = {
title: 'Language switch',
name: 'lang',
type: 'object',
fields: [
{
title: 'Language tag',
name: 'tag',
type: 'string',
validation: Rule =>
Rule.required().regex(/^[a-z]+(-[a-z]+)?$/i, { name: 'language tag' }),
},
],
}
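As a quick sanity check on that validation pattern, here is the same regular expression exercised in plain JavaScript, outside the studio:

```javascript
// The same pattern used in the Rule.regex() call above
const LANGUAGE_TAG = /^[a-z]+(-[a-z]+)?$/i

console.log(LANGUAGE_TAG.test('de')) // true
console.log(LANGUAGE_TAG.test('en-GB')) // true
console.log(LANGUAGE_TAG.test('en GB')) // false — spaces are not allowed
```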
Finally, we might want to customize how it looks in the rich text editor. This can be done via the blockEditor object of options. I picked the MdTranslate icon from the Material Design icon library.
blockEditor: { icon: MdTranslate, render: Lang },
And we can write a small component to specify the way the snippet is rendered within the rich text:
const Lang = props => (
<span title={`Content expressed in “${props.tag}”`} lang={props.tag}>
{props.children}
</span>
)
It might be tempting to prefix or suffix the content with a little flag in the rich text editor, however remember that flags are intended for countries and localities, not for languages. So it’s probably best not to.
So far we’ve only worked on the authoring experience. We need to make sure our frontend understands that custom annotation and renders a span with the right lang attribute.
import { PortableText } from '@portabletext/react'
const COMPONENTS = {
/* All your component definitions … */
marks: { lang: Lang },
}
const Lang = props => <span lang={props.value.tag}>{props.children}</span>
const RichText = props => (
<PortableText value={props.content} components={COMPONENTS} />
)
That’s essentially it. To summarize, all we did was add a custom annotation to our Portable Text schema so we can mark snippets of text as being expressed in a different language than the rest of the document. Then in our frontend, we made sure these nodes are rendered as <span> elements with the correct lang attribute.
I’m going to use this blog post to dump a bunch of advice on how to best get support, because one thing became apparent after going through literally hundreds and hundreds of requests: a lot of people have no clue how to ask for help efficiently. So let’s go through some basics, shall we?
The fastest support request is the one that doesn’t happen. Before jumping onto GitHub, Slack, Stack Overflow or whatnot, spend some time researching your problem. Give it a few Google searches. Browse through relevant GitHub issues, or even the code itself. Look around for something similar. Chances are that someone had a similar situation already, which has been addressed.
It kind of goes without saying, but you’d be surprised at how many people don’t even bother acknowledging that whoever will answer is a human being with their own day and their own stuff going on.
So you know, start your message by saying hi or something. Similarly, maybe thank people who take time to support you. Even if it is their job and they’re paid to do so. When you order food at a restaurant, you do thank the waiting staff (I hope). I don’t see why it should be any different online: if someone helps you or answers your question, say thanks? Seems so simple.
Speaking of saying hi, maybe consider an alternative to “hi guys”, as not everyone identifies as a guy, especially at Sanity where the support team is quite gender-diverse. I’m not going to make a big deal of using “guys” generically, but please understand that even though intended as such, it is not a gender-neutral term. Alternatives: “everyone”, “folks”, “friends” or just nothing.
This has to be the main bottleneck for people not getting the help they need: lack of information or clarity. Too often, we see messages like:
suddenly this morning i started to have this error in the sanity studio
Invalidreference filter, please check the custom "filter" option
This is a little thin. What are we talking about here? Where do you use the filter option in your schema? Can you share some code for that document type? What version of Sanity are you using, and have you done an update maybe?
Or:
I want to load stripe prices in the studio but getting
Module parse failed Unexpected token You may need an appropriate loader to handle this file type
Is there a way to fix this?
What does “loading stripe prices” mean? Connecting with a Stripe account? How? Are you following a tutorial of some sort? What have you tried so far? What are you attempting to build once connected with Stripe? Where does this error happen?
Or:
Unable to sign-in
Okay. Sign in where? Do you have more information? What have you tried? Are you trying to log in with email and password or an auth provider? What changed?
Or:
Hi, I'm New here, I'm having issues running Sanity init, keeps showing me errors.
What errors? Can you share a screenshot or at the very least the text message of the error you’re facing? Pretty hard to debug otherwise.
Or:
Hi, How can I add a variant to the schema product section with next.js? like color and sıze
Sanity is an unopinionated content platform. The concepts of products, sections or variants do not exist in Sanity. They only exist in your project because that’s what you’re building. So without significantly more information about your system, there is no way to help.
One way or another, these requests all lack critical information. Either about what is expected, or about what is happening, or more generally about the overall context of the situation.
A good way to ask for help is to follow this pattern: Goal → Context → Problem.
On a similar note, Julia Evans recently shared this zine on Twitter where she explains how she approaches debugging. She writes a message asking for help (akin to rubber-ducking):
Getting support online—be it via Stack Overflow, GitHub or Slack—can take time. Everyone has their own day and communication is very much asynchronous (yes, even on Slack). You’re not more important than anyone else, so be patient. Namely:
Long story short, just be patient. You might get an answer within the hour, or in a day or two. The best thing you can do to speed up the resolution time is to make your request airtight: be kind, give all the information you have in a digestible form, and hope for someone to pick it up soon.
In the meantime, take a break. Have a coffee. Take a nap. Walk the dog. Pet the cat. Have a cheese toastie. 🥪
A few more things, maybe a bit specific to Slack, but still worth keeping in mind when asking for support.
Picking the right channel is a great way to increase your chances of getting help. For instance, there is a #groq channel frequented by developers with a lot of love and expertise for the GROQ language. There is a #nextjs channel for problems that are specific to integrating Sanity with Next.js. Better to use them when possible than just #help, which is a bit of a brawl at times.
The Sanity Slack has over 25,000 members and some channels (especially #help) are pretty active. Without threads, it would be a total mess. So try to explain your situation in one message so that people can use a single thread to discuss with you. This way there is an easy-to-follow conversation for anyone chiming in (or finding the thread).
Posting code is a good way to provide additional information about your problem. When sharing code snippets, be sure to format them as such (using the Slack editor or triple backtick fences) so it’s easier to read.
If you need to share a lot of code though, consider that a link to a GitHub repository or a CodeSandbox might be better than hundreds of lines over Slack! ;)
Once you’ve solved your problem (either alone or thanks to contributors), mark your thread with a checkbox emoji. This is super useful for Sanity support engineers or members like me to skim through channels looking for threads that haven’t been solved yet.
It’s not overly difficult at the end of the day: be kind and provide relevant information so people have a good overview of the context. Then be patient until someone picks it up. 😊
A good way to remember it: PEP (as in pep talk)! It stands for “Polite, Explicit and Patient.” 😉
Sanity comes with a set of predefined roles for all plans. They are:
Now for the sake of the argument, let’s imagine that our schema has 2 different types: blogPost and page. We want editors to be able to handle blog posts themselves, but pages should be managed by administrators only.
To hide pages away from editors, we need to do a few things:
There are important caveats to take into consideration before implementing this solution:
That being said, if you just want to make sure your editors don’t modify the wrong thing by mistake, this is a good enough solution that is quite simple to implement.
It’s not overly advertised, but you can retrieve information about the current Sanity user by importing the dedicated store from part:@sanity/base/user. It provides an asynchronous function getCurrentUser() to get information, as well as an observable we can subscribe to. The object looks like this:
{
"email": "email@domain.com",
"id": "cc4zMBdMk",
"name": "Kitty",
"profileImage": "https://avatars.githubusercontent.com/u/3948238942?v=4",
"provider": "github",
"roles": [
{
"name": "administrator",
"title": "Administrator",
"description": "Read and write access to all datasets, with full access to all project settings."
}
]
}
The idea is that upon mounting the studio, we can get information about the current user and store it in a local variable or global object. Let’s create an access.js file and put the following code into it:
// access.js
import userStore from 'part:@sanity/base/user'
export const EDITOR_TYPES = ['blogPost']
export const getCurrentUser = () => {
userStore.me.subscribe(user => {
window._sanityUser = user || undefined
})
}
export const isAdmin = (user = window._sanityUser) =>
user?.roles.map(role => role.name).includes('administrator')
export const isNotAdmin = user => !isAdmin(user)
Now, we can use our isAdmin and isNotAdmin utility functions to create logic based on the user’s role.
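For clarity, here is how the isAdmin helper behaves given a user object shaped like the one above. The function is restated here so it can run outside the studio context:

```javascript
// Same logic as the `isAdmin` helper in access.js
const isAdmin = (user = window._sanityUser) =>
  user?.roles.map(role => role.name).includes('administrator')

isAdmin({ roles: [{ name: 'administrator' }] }) // true
isAdmin({ roles: [{ name: 'editor' }] }) // false
```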
Sanity comes with its own structure builder, an engine to customize how the CMS menus and panels behave.
{
"name": "part:@sanity/desk-tool/structure",
"path": "./deskStructure.js"
}
The default structure looks something like this:
// deskStructure.js
export default () => S.list().title('Content').items(S.documentTypeListItems())
Let’s rework it a bit. For each document type, we want to render it if the current user is an administrator, or if the type is eligible for an editor. We can also update the title of the list based on the role if we want.
// deskStructure.js
import { getCurrentUser, isAdmin, EDITOR_TYPES } from './access'
// Call our function to retrieve the current user first.
getCurrentUser()
export default () => {
const admin = isAdmin()
return S.list()
.title(admin ? 'Content' : 'Editorial content')
.items(
S.documentTypeListItems().filter(
item => admin || EDITOR_TYPES.includes(item.getId())
)
)
}
You can find a similar example in the Sanity documentation, making use of more roles (both default and custom).
Updating the “Create new document” dialog is essentially the same thing (although difficult to find in the docs). We need to implement the new-document-structure
part first:
{
"name": "part:@sanity/base/new-document-structure",
"path": "./newDocumentStructure.js"
}
And then rework it like this:
// newDocumentStructure.js
import S from '@sanity/base/structure-builder'
import { isAdmin, EDITOR_TYPES } from './access'
export default () => {
const admin = isAdmin()
return S.defaultInitialValueTemplateItems().filter(
item => admin || EDITOR_TYPES.includes(item.getId())
)
}
Unfortunately, the studio search cannot really be customized. There is an experimental feature to give more weight to certain results but there is no way to properly ignore some documents from the search.
Not all hope is lost though! There is a recent open pull-request to implement that very feature, so hopefully it will get merged soon.
Once it’s done, we can programmatically iterate over the documents of our schema to add this ignore flag for the admin-only types for editor users. It will look something like:
// schema.js
import createSchema from 'part:@sanity/base/schema-creator'
import schemaTypes from 'all:part:@sanity/base/schema-type'
import blogPost from './blogPost'
import page from './page'
import { isNotAdmin, EDITOR_TYPES } from './access'
export default createSchema({
name: 'default',
types: schemaTypes.concat(
Object.entries({ blogPost, page }).map(([type, document]) => ({
...document,
// As of writing, this is not yet a production feature. This is still in
// development and might not ever reach production.
// See: https://github.com/sanity-io/sanity/pull/3253
__experimental_search_ignore:
isNotAdmin() && !EDITOR_TYPES.includes(type),
}))
),
})
The readOnly and hidden properties that can be defined on fields accept a function that receives—among other things—the current user. This means it is possible to mark a certain field readonly, or fully hidden, for editors if we want to (as also demonstrated in the documentation).
To make sure fields cannot be updated by editors even if they managed to reach a document they’re not supposed to see (which could happen when following a reference or reaching a document via the search), we can automate it. When defining our schema, we iterate over all fields of all documents, and add a readOnly property based on the role.
// schema.js
import createSchema from 'part:@sanity/base/schema-creator'
import schemaTypes from 'all:part:@sanity/base/schema-type'
import blogPost from './blogPost'
import page from './page'
import { isAdmin, EDITOR_TYPES } from './access'
export default createSchema({
name: 'default',
types: schemaTypes.concat(
Object.entries({ blogPost, page }).map(([type, document]) => ({
...document,
fields: document.fields.map(addReadOnly(type)),
}))
),
})
function addReadOnly(type) {
return function (field) {
// Block types do not support the `readOnly` property, so we can skip.
if (field.type === 'block') return field
// If the `readOnly` property is not already defined and the type is for
// admins only, we add the `readOnly` property to restrict it for editors.
if (typeof field.readOnly === 'undefined' && !EDITOR_TYPES.includes(type)) {
field.readOnly = ({ currentUser }) => !isAdmin(currentUser)
}
// If the field is an array, recursively add the `readOnly` property to
// nested fields.
if (typeof field.of !== 'undefined') {
field.of.forEach(addReadOnly(type))
}
return field
}
}
For the same reason we should prevent editors from updating page fields, we should also prevent them from performing actions on page documents. We can do that by customizing document actions:
{
"implements": "part:@sanity/base/document-actions/resolver",
"path": "./resolveDocumentActions.js"
}
Then we can write a bit of logic to discard all actions on page documents if the user is not an admin:
// resolveDocumentActions.js
import defaultResolve from 'part:@sanity/base/document-actions'
import { isAdmin, EDITOR_TYPES } from './access'
export default function resolveDocumentActions(props) {
return isAdmin() || EDITOR_TYPES.includes(props.type)
? defaultResolve(props)
: []
}
Sanity doesn’t make it overly straightforward to manage the tools that appear at the top of the page in the upper menu (like the Desk tool, the media library or the Groq Vision plugin).
If we wanted to hide all tools but the Desk from editors, we would have to do that in our user observer (knowing that the desk is always the first one):
import tools from 'all:part:@sanity/base/tool'
import userStore from 'part:@sanity/base/user'
const getCurrentUser = () => {
userStore.me.subscribe(user => {
window._sanityUser = user || undefined
if (!isAdmin(user)) tools.splice(1)
})
}
A more thorough check could be done if we wanted to only allow some tools for editors instead of removing them all. Once again though, they could still access these tools by reaching the URL directly, so this is just obfuscation.
Unfortunately, if your project uses webpack aliases or Next.js module path aliases to avoid the complexity of relative paths, you might be facing this error:
Error in ./schema.js
Module not found: Error: Can't resolve '@/constants/something' in '/Users/kitty/Sites/my-project/sanity'
Thankfully, Sanity also uses Webpack as a bundler, so you can fix that by creating a webpack.sanity.js
file in the root of your Sanity project. Populate the file with the following code (you might need to tweak the aliases or paths to suit your project):
const path = require('path')
module.exports = function (config) {
config.resolve.alias['@'] = path.resolve(__dirname, '..', 'src')
return config
}
I must warn you that this is an undocumented feature on purpose, and Sanity cautions against extending the Webpack configuration:
NOTE: We do NOT encourage or suggest you extend the Sanity webpack config.
It's very easy to break existing functionality like hot module reloading, production build hashing, css module configuration, part resolution and so on.
We're working towards making the bundling of Sanity studios more configurable, but we're not quite there yet. Treat this as a last resort, and if you do choose to go this route, remember that Sanity uses Webpack ^3.8, so loaders, plugins and such needs to be compatible with this version.
I have reasons to believe this will not work in the future so we will eventually have to find another solution. In the meantime, this makes it possible to share code between a Next.js or Webpack-bundled project and Sanity!
Thankfully, Sanity uses Vite as a bundler, which is built on top of Rollup and also supports aliases. However, they need to be defined in a sanity.cli.js
file at the root of the Sanity project, and not in the sanity.config.js
file:
import path from 'path'
import { defineCliConfig } from 'sanity/cli'
export default defineCliConfig({
api: {},
vite: config => {
if (!config.resolve) config.resolve = {}
if (!config.resolve.alias) config.resolve.alias = {}
config.resolve.alias['@'] = path.resolve(__dirname, '..', 'src')
return config
},
})
In this article, I want to walk through automating the creation of a table of contents for the headings contained in a portable text tree. The idea goes like this:
Let’s start here, with the body
prop containing the portable text queried from Sanity:
const BlogPost = props => {
return <PortableText value={props.body} />
}
I’ll be using React in this article, but the core logic is framework-agnostic and applicable regardless of how you render your components.
The first thing we need is a way to extract heading nodes from that data tree. To do so, we need a way to walk the tree, test every node, and collect the ones that match a given function.
This is how we would create such a function, using Array.prototype.reduce:
const filter = (ast, match) =>
ast.reduce((acc, node) => {
if (match(node)) acc.push(node)
if (node.children) acc.push(...filter(node.children, match))
return acc
}, [])
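For instance, running it against a minimal, made-up portable-text-like tree:

```javascript
const filter = (ast, match) =>
  ast.reduce((acc, node) => {
    if (match(node)) acc.push(node)
    if (node.children) acc.push(...filter(node.children, match))
    return acc
  }, [])

// A hypothetical, simplified tree: one heading block, one normal block
const ast = [
  { _type: 'block', style: 'h2', children: [{ _type: 'span', text: 'Intro' }] },
  { _type: 'block', style: 'normal', children: [{ _type: 'span', text: 'Hi' }] },
]

const headings = filter(ast, node => node.style === 'h2')
console.log(headings.length) // 1
console.log(headings[0].children[0].text) // Intro
```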
Now, we can create a findHeadings
function that looks for nodes with a style
prop like h2
, h3
…
const findHeadings = ast => filter(ast, node => /h\d/.test(node.style))
Note that style
has nothing to do with the style
HTML attribute. It’s a property called style
on Portable Text nodes which may contain things like normal
, h2
, h3
, etc.
Edit from October 1st, 2022: Simeon Griggs, from the Sanity team, came up with a clever way to retrieve headings directly in groq by leveraging new groq features. It avoids doing it in JavaScript like before, and could be faster for very large trees since groq is typically quite performant.
*[ _type == "article" ] {
body,
"headings": body[length(style) == 2 && string::startsWith(style, "h")]
}
Now, we want a function that nests these headings properly based on their level. This is surprisingly difficult to do, so I decided to rely on the code of outline-audit I wrote in 2016, which essentially does the same thing. Here is a compact version:
const get = (object, path) => path.reduce((prev, curr) => prev[curr], object)
const getObjectPath = path =>
path.length === 0
? path
: ['subheadings'].concat(path.join('.subheadings.').split('.'))
const parseOutline = ast => {
const outline = { subheadings: [] }
const headings = findHeadings(ast)
const path = []
let lastLevel = 0
headings.forEach(heading => {
const level = Number(heading.style.slice(1))
heading.subheadings = []
if (level < lastLevel) for (let i = lastLevel; i >= level; i--) path.pop()
else if (level === lastLevel) path.pop()
const prop = get(outline, getObjectPath(path))
prop.subheadings.push(heading)
path.push(prop.subheadings.length - 1)
lastLevel = level
})
return outline.subheadings
}
We now have an array of top-level headings, and each of these headings has its own subheadings in its subheadings
prop. Pretty neat! Here is an example:
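To make that shape concrete, here is a condensed run of all the functions above on a made-up body (heading text simplified to single spans):

```javascript
const filter = (ast, match) =>
  ast.reduce((acc, node) => {
    if (match(node)) acc.push(node)
    if (node.children) acc.push(...filter(node.children, match))
    return acc
  }, [])

const findHeadings = ast => filter(ast, node => /h\d/.test(node.style))

const get = (object, path) => path.reduce((prev, curr) => prev[curr], object)

const getObjectPath = path =>
  path.length === 0
    ? path
    : ['subheadings'].concat(path.join('.subheadings.').split('.'))

const parseOutline = ast => {
  const outline = { subheadings: [] }
  const headings = findHeadings(ast)
  const path = []
  let lastLevel = 0

  headings.forEach(heading => {
    const level = Number(heading.style.slice(1))
    heading.subheadings = []

    if (level < lastLevel) for (let i = lastLevel; i >= level; i--) path.pop()
    else if (level === lastLevel) path.pop()

    const prop = get(outline, getObjectPath(path))
    prop.subheadings.push(heading)
    path.push(prop.subheadings.length - 1)
    lastLevel = level
  })

  return outline.subheadings
}

// Hypothetical body: an h2, two h3s beneath it, then another h2
const body = [
  { style: 'h2', children: [{ text: 'Setup' }] },
  { style: 'h3', children: [{ text: 'Install' }] },
  { style: 'h3', children: [{ text: 'Configure' }] },
  { style: 'h2', children: [{ text: 'Usage' }] },
]

const outline = parseOutline(body)
console.log(outline.length) // 2: “Setup” and “Usage”
console.log(outline[0].subheadings.length) // 2: “Install” and “Configure”
console.log(outline[1].subheadings.length) // 0
```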
We have everything we need to render our table of contents in the frontend!
const BlogPost = props => {
const outline = parseOutline(props.body)
return (
<>
<TableOfContents outline={outline} />
<PortableText value={props.body} />
</>
)
}
And finally, our TableOfContents
component:
const getChildrenText = props =>
props.children
.map(node => (typeof node === 'string' ? node : node.text || ''))
.join('')
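As a quick illustration, with a made-up heading block mixing spans and plain strings:

```javascript
const getChildrenText = props =>
  props.children
    .map(node => (typeof node === 'string' ? node : node.text || ''))
    .join('')

// Hypothetical heading node
const heading = {
  style: 'h2',
  children: [{ _type: 'span', text: 'Hello ' }, 'world'],
}

console.log(getChildrenText(heading)) // Hello world
```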
const TableOfContents = props => (
<ol>
{props.outline.map(heading => (
<li>
<a href={'#' + heading._key}>{getChildrenText(heading)}</a>
{heading.subheadings.length > 0 && (
<TableOfContents outline={heading.subheadings} />
)}
</li>
))}
</ol>
)
A couple of things to note here: the component renders itself recursively to handle nested subheadings, and the text of each heading is extracted with the getChildrenText function.
Right now, we are using the Sanity node key (the _key
property) as the ID for our headings. It’s okay, but it doesn’t make for great URLs (e.g. /your-path#b4282a9f0b2e
). It can also generate invalid IDs since keys can start with a number, which is not allowed in HTML.
We can tweak our findHeadings
function to provide more information for each node. Sanity uses speakingurl to generate slugs under-the-hood, so there are good chances it’s already in your bundle. We can use it to transform the heading text into a slug (e.g. “Customizing anchors” would become “customizing-anchors”).
const findHeadings = ast =>
filter(ast, node => /h\d/.test(node.style)).map(node => {
const text = getChildrenText(node)
const slug = speakingurl(text)
return { ...node, text, slug }
})
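Here is the updated findHeadings at work. Since speakingurl may not be installed here, this sketch swaps in a naive slugify stand-in — the real library handles far more edge cases (diacritics, symbols, other scripts…):

```javascript
// Naive stand-in for the speakingurl package, for illustration only
const speakingurl = text =>
  text.toLowerCase().trim().replace(/[^a-z0-9]+/g, '-').replace(/(^-|-$)/g, '')

const getChildrenText = props =>
  props.children
    .map(node => (typeof node === 'string' ? node : node.text || ''))
    .join('')

const filter = (ast, match) =>
  ast.reduce((acc, node) => {
    if (match(node)) acc.push(node)
    if (node.children) acc.push(...filter(node.children, match))
    return acc
  }, [])

const findHeadings = ast =>
  filter(ast, node => /h\d/.test(node.style)).map(node => {
    const text = getChildrenText(node)
    const slug = speakingurl(text)
    return { ...node, text, slug }
  })

const body = [{ style: 'h2', children: [{ text: 'Customizing anchors' }] }]
console.log(findHeadings(body)[0].slug) // customizing-anchors
```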
And we can update our component:
<a href={'#' + heading.slug}>{heading.text}</a>
That’s it folks! I hope it helps you generate tables of contents for your portable text. Feel free to reach out on Twitter if you have any questions!
Depending on what happens in these functions, it might be worth considering setting up some rate-limiting so they do not get abused. The idea is that as someone is issuing more and more requests, the responses get slower until they eventually stop and return HTTP 429 Too Many Requests.
This is how we would use it:
export async function handler(request, response) {
try {
await applyRateLimit(request, response)
} catch {
return response.status(429).send('Too many requests')
}
// Rest of the API route code.
}
I personally like express-rate-limit and express-slow-down. Unfortunately, they’re Express middlewares, and Next.js isn’t making it too trivial to use Express/Connect middlewares in API routes. The Next.js documentation recommends (a flavor of) the following function to convert them:
const applyMiddleware = middleware => (request, response) =>
new Promise((resolve, reject) => {
middleware(request, response, result =>
result instanceof Error ? reject(result) : resolve(result)
)
})
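Before wiring in the real rate limiters, the conversion can be observed with toy connect-style middlewares (the names below are made up for illustration — no Express required):

```javascript
const applyMiddleware = middleware => (request, response) =>
  new Promise((resolve, reject) => {
    middleware(request, response, result =>
      result instanceof Error ? reject(result) : resolve(result)
    )
  })

// Hypothetical middlewares: one calls next() cleanly, one passes an error
const passes = (request, response, next) => next()
const fails = (request, response, next) => next(new Error('Blocked'))

applyMiddleware(passes)({}, {}).then(() => console.log('resolved'))
applyMiddleware(fails)({}, {}).catch(error => console.log(error.message)) // Blocked
```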
Then, our applyRateLimit
function. It takes 2 middlewares (more on that in a second), runs them through the applyMiddleware
function to make them consumable outside of Express/Connect, and then awaits them with the request and response. If a middleware rejects, applyRateLimit
rejects as well. If they all resolve, applyRateLimit
resolves successfully.
async function applyRateLimit(request, response) {
await Promise.all(
middlewares
.map(applyMiddleware)
.map(middleware => middleware(request, response))
)
}
Now, our middlewares
constant is an array made of our two middlewares: the one that causes slowness, and the one that eventually causes HTTP 429. The configuration values can (should) be tweaked based on the intended severity of the rate limit.
import rateLimit from 'express-rate-limit'
import slowDown from 'express-slow-down'
const getIP = request =>
request.ip ||
request.headers['x-forwarded-for'] ||
request.headers['x-real-ip'] ||
request.connection.remoteAddress
const limit = 10
const windowMs = 60 * 1_000
const delayAfter = Math.round(limit / 2)
const delayMs = 500
const middlewares = [
slowDown({ keyGenerator: getIP, windowMs, delayAfter, delayMs }),
rateLimit({ keyGenerator: getIP, windowMs, max: limit }),
]
Here, it says one can do 10 requests within a 60-second window before being blocked, and responses start being slowed down (by an additional 500ms each) after the 5th request within the window. In practice, it looks like this:
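That slowdown curve is simple arithmetic. This sketch mirrors the configuration above — it is not the library’s internal code, just the delay schedule it produces within a single window:

```javascript
const limit = 10
const delayAfter = Math.round(limit / 2) // 5
const delayMs = 500

// Delay applied to the nth request within one 60-second window
const delayFor = requestNumber =>
  requestNumber <= delayAfter ? 0 : (requestNumber - delayAfter) * delayMs

const schedule = [...Array(limit)].map((_, index) => delayFor(index + 1))
console.log(schedule) // requests 1–5 free, then 500, 1000, 1500, 2000, 2500ms
```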
If we want to customize the configuration per API route, we can refactor our code to be wrapped in a function:
export const getRateLimitMiddlewares = ({
limit = 10,
windowMs = 60 * 1000,
delayAfter = Math.round(limit / 2),
delayMs = 500,
} = {}) => [
slowDown({ keyGenerator: getIP, windowMs, delayAfter, delayMs }),
rateLimit({ keyGenerator: getIP, windowMs, max: limit }),
]
And then we would use it like this:
const middlewares = getRateLimitMiddlewares({ limit: 50 }).map(applyMiddleware)
export default async function handler(request, response) {
try {
await Promise.all(
middlewares.map(middleware => middleware(request, response))
)
} catch {
return response.status(429).send('Too Many Requests')
}
// Rest of the API route code.
}
Note that it is very important that the middlewares are defined outside of the API route handler; otherwise, every incoming request creates a fresh set of middlewares, which means the rate limit will never kick in.
import rateLimit from 'express-rate-limit'
import slowDown from 'express-slow-down'
const applyMiddleware = middleware => (request, response) =>
new Promise((resolve, reject) => {
middleware(request, response, result =>
result instanceof Error ? reject(result) : resolve(result)
)
})
const getIP = request =>
request.ip ||
request.headers['x-forwarded-for'] ||
request.headers['x-real-ip'] ||
request.connection.remoteAddress
export const getRateLimitMiddlewares = ({
limit = 10,
windowMs = 60 * 1000,
delayAfter = Math.round(limit / 2),
delayMs = 500,
} = {}) => [
slowDown({ keyGenerator: getIP, windowMs, delayAfter, delayMs }),
rateLimit({ keyGenerator: getIP, windowMs, max: limit }),
]
const middlewares = getRateLimitMiddlewares()
async function applyRateLimit(request, response) {
await Promise.all(
middlewares
.map(applyMiddleware)
.map(middleware => middleware(request, response))
)
}
export default applyRateLimit
I hope this helps you secure your Next.js applications! ✨
From the discussion we had, here are the requirements I understood:
On to a table of contents:
If you just want the code, you can play with my original React implementation on CodeSandbox, or the plain HTML/CSS version on CodePen. The CSS code should be fully commented either way.
Cats. Let’s imagine we want to display a list of cats. Every cat card can be interacted with to open a page dedicated to that cat. Let’s see what the markup looks like.
<ul>
<li class="Card">
<img
class="Card-Image"
src="https://placekitten.com/200/200"
alt="Picture of Lilith"
/>
<div class="Card-Content">
<p class="Card-Title">
<a class="Card-Primary-Action" href="/cat/lilith">Lilith</a>
</p>
<p class="Card-Meta">10 year old British Shorthair</p>
</div>
</li>
<!-- More cards -->
</ul>
Allow me to point out that the link (it could also be a button if it performed an action instead of going somewhere) is placed on the primary piece of information only. It does not wrap the whole card.
The reason for it is that links can be listed by assistive technologies (such as VoiceOver’s rotor or a11y-outline), so we want to provide just enough information so that they’re understandable and identifiable on their own. We don’t want the entire card’s content to be read out when listing the links—it’s too much.
In that case, we want the link to be listed as “Lilith, link” not “Lilith, 10 year old British Shorthair, link”. And while the latter would still be acceptable, it quickly becomes problematic when cards hold more and more content (think product cards with a lot of meta data for instance).
Now, we want the whole card to be interactive, not just the main content. So we need to expand the hitbox with CSS. We can do that by using a pseudo-element which sits on top of the whole card. Skipping unrelated properties, it might look like this:
/**
* 1. Position context for the link’s pseudo-element.
*/
.Card {
position: relative; /* 1 */
}
/**
* 1. Use a pseudo-element to expand the hitbox of the link over
* the whole card.
* 2. Expand the hitbox over the whole card.
* 3. Place the pseudo-element on top of the whole card.
*/
.Card-Primary-Action::before {
content: ''; /* 1 */
position: absolute; /* 2 */
inset: 0; /* 2 */
z-index: 1; /* 3 */
}
This does the job. Now the whole card is clickable. To make it look as such though, we need to adjust the focus styles:
/**
* 1. Show that the card is interactive.
*/
.Card-Primary-Action::before {
cursor: pointer; /* 1 */
border: 2px solid transparent; /* 1 */
transition: border-color 200ms;
}
/**
* 1. Display interactivity on hover/focus by highlighting the border.
*/
.Card-Primary-Action:hover::before,
.Card-Primary-Action:focus::before {
border-color: hotpink; /* 1 */
}
/**
* 1. Hide the default focus outline as it’s recreated with a border.
*/
.Card-Primary-Action:focus {
outline: none; /* 1 */
}
Now what if the card contains links or buttons? This is where we’re happy not to have wrapped it all with an anchor or a button, since that would prevent us from adding interactive elements within it.
Ultimately, we can add other links and buttons at will. For instance, let’s say we want a button on the right side of the card to get some more details about the cat.
<li class="Card">
<img class="Card-Image" />
<div class="Card-Content">…</div>
<button class="Card-Secondary-Action">Details</button>
</li>
The only thing we need to do is bump its z-index
so it sits above the pseudo-element that covers the card.
/**
* 1. Place the secondary action on top of the card.
*/
.Card-Secondary-Action {
position: relative; /* 1 */
z-index: 2; /* 1 */
}
Adrian Roselli reached out to share his experience having tested this pattern with users. He found that having dead space around the button is important to avoid mis-taps. If possible, consider carving out some space for the additional control instead of placing it on top of the card link. Read more in his article about interactive cards.
Let me take this post as an opportunity to discuss whether a card should contain a heading.
I don’t think there is a right or a wrong answer per se. There might be cases where making the primary content of a card a link is worth it, and cases where it’s not. I guess it depends on whether the heading a) introduces a significant amount of content, and b) benefits from being listed among all the headings of the page.
For what it’s worth, Heydon Pickering does use links in his article about cards. Looking back at the list of transactions we built for N26 back in the days, we certainly wouldn’t want dozens or hundreds of transactions to have their own heading—that would make the headings listing unusable.
Long story short: don’t wrap the whole card with a link; instead, link your main distinctive piece of information and expand its hitbox with CSS. This offers a better experience when listing links, and enables cards to contain other interactive elements.
I would also recommend reading Heydon’s piece on an inclusive card component as he covers all of this and more. I only remembered about his outstanding work when I was done with this post. Oh well. 😅
Hope this helps! ✨
See the Pen Accessible Cards by Kitty Giraudel (@KittyGiraudel) on CodePen.
Disclaimer: I have no Computer Science degree. I have been doing frontend development for the last 10 years, a discipline you rarely need linked lists for. So take my suggestions here with a grain of salt.
A circular array is just that. A collection of items that loops on itself so that the last element connects to the first. This can be pretty handy in games and problems based on a circular structure (such as the Josephus problem or the popular mobile game Atomas). For instance:
(1) 2
6 3 → … 6 (1) 2 …
5 4
In this article, I’ll walk you through my implementation. If you just want to see the code, check the circularray repository on GitHub.
Our circular array is implemented as a linked list for convenience and performance. That means we don’t maintain an actual array under the hood, just a bunch of “nodes” connected to one another. Like in your typical double-ended queue, every node has a previous and a next node.
class Node {
constructor(value) {
this.value = value
this.next = this.prev = null
}
remove() {
this.prev.next = this.next
this.next.prev = this.prev
}
}
This remove
method will come in handy later. It gives the capacity for a node to remove itself from the list by connecting its two neighbors (and thus removing all references to itself).
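To see remove() in action, here is a tiny hand-wired circle of three nodes (the names are arbitrary):

```javascript
class Node {
  constructor(value) {
    this.value = value
    this.next = this.prev = null
  }
  remove() {
    this.prev.next = this.next
    this.next.prev = this.prev
  }
}

// Hand-wire a ↔ b ↔ c into a circle
const [a, b, c] = ['a', 'b', 'c'].map(value => new Node(value))
a.next = b; b.next = c; c.next = a
a.prev = c; b.prev = a; c.prev = b

// Removing b reconnects its two neighbors directly
b.remove()
console.log(a.next.value, c.prev.value) // c a
```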
Note that nodes are completely transparent to our usage. This is solely an internal data wrapper. We never actually manipulate the nodes manually when using our circular array.
Our array relies on a “pointer.” When adding items to our array, we’ll insert them before the pointer. We also need to track the number of items manually, since we don’t actually use an array.
Our class will look like this (we’ll break every function down in further sections):
class CircularArray {
size = 0
pointer = null
constructor(values = []) {}
get length() {}
push(value) {}
unshift(value) {}
pop() {}
shift() {}
rotate(offset) {}
toArray() {}
}
Adding items to our circle means inserting a node to the left of (before) the pointer. For instance, consider a circle with number 1 to 9 and the pointer being on number 1, adding 10 would imply:
… 9 (1) 2 … → … 9 10 (1) 2 …
The first thing we need to do in our push
method is wrap our given value with a node, since anything in our list needs to be a node.
push (value) {
const node = new Node(value)
// … see below
}
We also need to increment the size of our array.
this.size++
If we don’t have a pointer yet (which happens when the list is empty), our node becomes the pointer. And because our array is a circular one, we mark the previous and next nodes of our only node as … itself. It’s kind of an ouroboros, but the whole point is that our loop is always closed. The node is both on the left and on the right of itself.
if (!this.pointer) {
node.next = node.prev = this.pointer = node
}
If we have a pointer though, we can deal with our main logic. We always want to insert items before our pointer.
else {
node.next = this.pointer // Mark as left of pointer
node.prev = this.pointer.prev // Mark as right of former last item
node.prev.next = node // Update former last item’s right
this.pointer.prev = node // Update pointer’s left
}
Here is the full push method:
push(value) {
const node = new Node(value)
this.size++
if (!this.pointer) {
node.next = node.prev = this.pointer = node
} else {
node.next = this.pointer
node.prev = this.pointer.prev
node.prev.next = node
this.pointer.prev = node
}
return this
}
To make the circular array instantiation a little more convenient, we can iterate over the values (or array) given to the constructor and push them one by one.
If we want to insert items at the “start” of our array, we can do the exact same thing as we just did, and then move the pointer to the newly added item.
unshift(value) {
this.push(value)
this.pointer = this.pointer.prev
}
So if we were to push number 10 at the start, it would look like this:
… 9 (1) 2 … → … 9 (10) 1 2 …
Popping items means removing the item to the left of the pointer (the “last” item). On a circle with numbers from 1 to 9, dropping 9 would mean:
… 8 9 (1) 2 … → … 8 (1) 2 …
Here is what our pop
method would look like. First, we make sure there is an item in the list, otherwise we can return undefined
(like Array.prototype.pop
does).
pop() {
if (!this.pointer) return undefined
// … see below
}
Then, we store the value of our last node (the one before the pointer) that we’ll return.
const value = this.pointer.prev.value
We reduce our size by 1.
this.size--
Then depending on whether we’re removing the only item or not, we do one of two things. If the array is being emptied, we just clear the pointer. Otherwise, we remove the node before the pointer.
if (this.size === 0) {
this.pointer = null
} else {
this.pointer.prev.remove()
}
Finally, we return our value:
return value
Here is the full pop method:
pop() {
if (!this.pointer) return undefined
const value = this.pointer.prev.value
this.size--
if (this.size === 0) {
this.pointer = null
} else {
this.pointer.prev.remove()
}
return value
}
The shift
method looks very similar so I’ll skip it for simplicity. An example would be:
… 9 (1) 2 3 … → … 9 (2) 3 …
What if we want to remove items that are not at the start or the end of the array? This is where rotation comes into play. Rotating our array means moving the pointer around so that when we add or remove items, we do that where we want.
Current state | Clockwise by 1 | Counter-clockwise by 1
… 9 (1) 2 … | … 8 (9) 1 … | … 1 (2) 3 …
Our rotation function takes an “offset”, which is the number of times we want to move the pointer. If it’s positive, we rotate the circle clockwise (so we move the pointer to the left). If it’s negative, we rotate the circle counter-clockwise (so we move the pointer to the right).
To avoid doing more rotations than we need to, we initially modulo the offset by the size. This way, if we try rotating a circle of 10 items 101 times, we end up rotating it once only.
rotate(offset) {
offset %= this.size
if (offset > 0) while (offset--) this.pointer = this.pointer.prev
else while (offset++) this.pointer = this.pointer.next
return this
}
Our circular array wouldn’t be as useful if we didn’t have a way to output it as a regular array. What it means for us is to disconnect our circle before the pointer, and stretch it as a line. Depending on whether we want to unroll our circle clockwise or counter-clockwise, we can pass a different argument.
toArray(direction = 'next') {
const items = []
if (!this.size) return items
let node = this.pointer
do {
items.push(node.value)
node = node[direction]
} while (!Object.is(node, this.pointer))
return items
}
Note that our prev
parameter is not the same as using .reverse()
. In our case, the pointer is always the first item, and then we unroll clockwise or counter-clockwise.
A perfect example of when a circular array is handy is for the Josephus problem.
To put it simply: counting begins at a specified point in the circle and proceeds around the circle in a specified direction (typically clockwise). After a specified number of items are skipped, the next item is removed. The procedure is repeated with the remaining items, starting with the next one, going in the same direction and skipping the same number of items, until only one item remains.
Considering we would skip one item out of 2, this is how it would be implemented. At every iteration, we rotate the circle clockwise by 1 and drop the first one, until we have only one item remaining.
const circle = new CircularArray([1, 2, 3, 4, 5, 6, 7, 8, 9])
while (circle.length > 1) circle.rotate(-1).shift()
console.log('Remaining item is', circle.pop()) // 3
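For reference, here is everything from this post assembled into one runnable sketch. The shift method was skipped above, so the version here is my reconstruction from its description (remove the pointer node and move the pointer to the next one); unshift is omitted since the Josephus run doesn’t need it:

```javascript
class Node {
  constructor(value) {
    this.value = value
    this.next = this.prev = null
  }
  remove() {
    this.prev.next = this.next
    this.next.prev = this.prev
  }
}

class CircularArray {
  size = 0
  pointer = null

  constructor(values = []) {
    values.forEach(value => this.push(value))
  }

  get length() {
    return this.size
  }

  push(value) {
    const node = new Node(value)
    this.size++
    if (!this.pointer) {
      node.next = node.prev = this.pointer = node
    } else {
      node.next = this.pointer
      node.prev = this.pointer.prev
      node.prev.next = node
      this.pointer.prev = node
    }
    return this
  }

  pop() {
    if (!this.pointer) return undefined
    const value = this.pointer.prev.value
    this.size--
    if (this.size === 0) this.pointer = null
    else this.pointer.prev.remove()
    return value
  }

  // Reconstructed: remove the pointer node, move the pointer to the next one
  shift() {
    if (!this.pointer) return undefined
    const value = this.pointer.value
    this.size--
    if (this.size === 0) this.pointer = null
    else {
      const next = this.pointer.next
      this.pointer.remove()
      this.pointer = next
    }
    return value
  }

  rotate(offset) {
    offset %= this.size
    if (offset > 0) while (offset--) this.pointer = this.pointer.prev
    else while (offset++) this.pointer = this.pointer.next
    return this
  }

  toArray(direction = 'next') {
    const items = []
    if (!this.size) return items
    let node = this.pointer
    do {
      items.push(node.value)
      node = node[direction]
    } while (!Object.is(node, this.pointer))
    return items
  }
}

const circle = new CircularArray([1, 2, 3, 4, 5, 6, 7, 8, 9])
while (circle.length > 1) circle.rotate(-1).shift()
console.log(circle.pop()) // 3
```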
That’s about it! It might not be much, but it was a lot of fun for me to learn about linked lists and try my hands on one. The code on GitHub contains a few more features that we haven’t covered today like a length
setter to truncate the array.
I hope you liked it!
Solving this puzzle got me to use the with
JavaScript statement for the very first time. Worth a few lines!
Day 8 of 2017 has a very straightforward problem statement. Given a set of instructions like the ones below, figure out the maximum value reached by any variable (called “registers”). Quoting directly from the manual:
Each instruction consists of several parts: the register to modify, whether to increase or decrease that register's value, the amount by which to increase or decrease it, and a condition. If the condition fails, skip the instruction without modifying the register. The registers all start at 0. The instructions look like this:
b inc 5 if a > 1
a inc 1 if b < 5
c dec -10 if a >= 1
c inc -20 if c == 10
As we can see, all lines are constructed the same way:
inc
or dec
to indicate whether to increment or decrement the register.if
keyword.<
, >
, <=
, >=
and ==
.I guess the safe and healthy way to approach this problem is to break down each line into its components as listed above, but this is Advent of Code and it’s the one time we don’t have to be safe and healthy… 😈
Inspecting my input (which is 1,000 expressions, not just 4), the thing that struck me is that it looks kinda like JavaScript. What if—and hear me out—we did the least amount of work to be able to just evaluate the lines as pieces of code?
It would look something like this:
lines.forEach(line => {
const [action, condition] = line.split(' if ')
eval(`if (${condition}) ${action}`)
})
This first tries to execute if (a > 1) b inc 5
, which is not valid JavaScript. We need to change these inc
and dec
for actual operators.
lines.forEach(line => {
const [action, condition] = line.split(' if ')
const operation = action.replace('inc', '+=').replace('dec', '-=')
eval(`if (${condition}) ${operation}`)
})
It now tries to execute if (a > 1) b += 5
at this stage, which is good! We unfortunately have a new error:
a
is not defined
Hard to argue with that—it is not defined. One way to solve the problem would be to manually define the variable a
(and all others) at the top of our function, but that’s a tad too cumbersome, especially when there are 1,000 instructions with many many different registers.
What if instead of using individual variables, we used an object with dynamic keys? So we would have a single registers
object, and then we would read and write keys in it.
const registers = {}
That’s getting us one step closer, but that’s still not enough because a
(and other variables) remains undefined. We could prefix variable names with registers.
in our expression. This way, we would run if (registers.a > 1) registers.b += 5
, which is what we want, but it’s still a little annoying having to do that.
with
statementEnters with
. If you’ve never heard of it, don’t worry because it’s a discouraged feature which happens to be forbidden in strict mode. 😅 What it does is “extending the scope chain for a statement.”
When doing b += 5
, JavaScript looks for the variable b
in the current scope (like the current block, or the condition, or the function) then goes up the scopes until reaching the global object, looking for the variable called b
. What with
does is inject the given object in the scope chain, so the lookup also happens there. MDN has a good snippet to illustrate how it works:
// From: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/with
let a, x, y
const r = 10
with (Math) {
a = PI * r * r
x = r * cos(PI)
y = r * sin(PI / 2)
}
In this case, using PI
, cos
and sin
—which would typically fail because there are no variables named as such—end up working. That’s because the Math
object was added to the lookup chain and therefore PI
, cos
and sin
were all found there.
You might see where we’re going with that. If we inject our registers
object to our evaluation context, variables like a
, b
and c
will be read in the registers
object.
const registers = {}
lines.forEach(line => {
const [action, condition] = line.split(' if ')
const operation = action.replace('inc', '+=').replace('dec', '-=')
with (registers) eval(`if (${condition}) ${operation}`)
})
Wait but, it still doesn’t work. Injecting registers
into the scope chain doesn’t do magic though, and a
, b
and c
are still not defined. And even if the interpreter didn’t crash on this, it would try to increment or decrement undefined
which would result in NaN
.
So we also need to initialize these values to 0. Are we back to square one? Not exactly. We could just capture everything that looks like a variable name in each line and instantiate them to 0 if they’re not already on the registers
object.
line.match(/\w+/g).forEach(variable => {
registers[variable] = registers[variable] || 0
})
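Running that initialization on the first sample line shows exactly what gets captured:

```javascript
const registers = {}
const line = 'b inc 5 if a > 1'

// Every word-like token becomes a register initialized to 0
line.match(/\w+/g).forEach(variable => {
  registers[variable] = registers[variable] || 0
})

console.log(Object.keys(registers)) // ['b', 'inc', '5', 'if', 'a', '1']
```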
Keen observers among you might have noticed that this will also capture inc
or dec
as well as if
to which I say: it doesn’t matter? But if we were precious about it, we could look in the condition and the operation exclusively instead:
;(condition + ' ' + operation).match(/\w+/g).forEach(variable => {
registers[variable] = registers[variable] || 0
})
Once the logical nullish assignment operator gets more widely adopted, we can do registers[variable] ??= 0
to define only if not yet present.
And we’re basically done. Now all at once for good measure:
const run = lines => {
const registers = {}
lines.forEach(line => {
const [action, condition] = line.split(' if ')
const operation = action.replace('inc', '+=').replace('dec', '-=')
line.match(/\w+/g).forEach(variable => {
registers[variable] = registers[variable] || 0
})
with (registers) eval(`if (${condition}) ${operation}`)
})
return Math.max(...Object.values(registers))
}
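To check the whole thing against the sample instructions from the puzzle (whose stated answer is 1), here is a variation that wraps the evaluation in the Function constructor — bodies created that way are always parsed as non-strict code, so with remains usable even from a strict-mode module:

```javascript
const run = lines => {
  const registers = {}

  lines.forEach(line => {
    const [action, condition] = line.split(' if ')
    const operation = action.replace('inc', '+=').replace('dec', '-=')

    line.match(/\w+/g).forEach(variable => {
      registers[variable] = registers[variable] || 0
    })

    // Function constructor bodies are non-strict, so `with` is allowed here
    new Function(
      'registers',
      `with (registers) if (${condition}) ${operation}`
    )(registers)
  })

  return Math.max(...Object.values(registers))
}

const sample = [
  'b inc 5 if a > 1',
  'a inc 1 if b < 5',
  'c dec -10 if a >= 1',
  'c inc -20 if c == 10',
]

console.log(run(sample)) // 1
```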
That’s it! 9 lines of JavaScript for the whole puzzle. Not bad I say.
Learn what you want. There is enough to learn for a lifetime, so don’t feel like you have to follow the hype. Frameworks and libraries come and go, fundamentals remain, so I would recommend focusing on HTML, CSS and JavaScript if you get to choose (and do frontend).
Learn about accessibility, when you get a chance. At least the basics, like what it is, why it matters, how it works… It’s core to frontend development, and tends to be forgotten or included too late in the process. Get into it early if you can!
Don’t get too precious about your code. Trash it if needed. Start over. You’re more than your output. If someone has a better solution, consider it. Being too attached to code is just an impediment in my opinion. Code doesn’t have to be yours to be valuable.
Write code for humans. Write code comments. Self documenting code is a harmful myth, don’t fall for it. Slap that whole paragraph before that function to make it clearer to the next person (might be you!). Comment your code and encourage others to do so.
Use an incremental approach as much as possible. Don’t prematurely optimize. Things don’t start perfect. Write that crappy code. Copy paste some stuff. Make it work first. Once it works and you get it, improve it, clean it up, polish it. Or not, sometimes it’s also fine!
Set up Prettier wherever you can, no matter what configuration you choose. Life is too short to argue about coding style. Automate that stuff so your code reviews are actually interesting and valuable. Don’t turn people into human linters—it’s costly and ineffective.
It eventually becomes interesting to know a bit about setting up automated deployment or perhaps even testing. No rush, and no big deal if you don’t know much about it. But it did come in handy for me to learn about continuous integration & deployment.
If you have the occasion, dig into Cypress. It’s the best thing that happened to frontend testing in the last decade in my opinion. It’s an absolutely incredible tool that’s completely free and open-source which makes end-to-end testing reliable and (gasp) even enjoyable.
Spend time on docs. At work, for your side projects, for yourself. It’s severely underrated, which is a shame because it’s absolutely vital. I am convinced one of the reasons I’ve had some success as a dev is because I love writing. Documentation is worth time and effort.
Debugging is a skill to learn. It takes time, it takes practice. It starts by reading error messages, and learning what to Google. It gets easier with time, and is a useful (necessary?) skill to grow. Don’t get frustrated if you struggle, especially at the beginning.
Accept that you can’t understand everything. Heck, you don’t even need to understand everything! Go with the flow. Dig when you need to, otherwise just take things as presented, enough to do your work and move on. Knowledge grows with experience.
Admit when you don’t know stuff. A common mistake people make when gaining seniority is thinking they have to know everything so as not to appear weak or something (definitely done that as well). It’s good to admit not knowing. It shows humility and gives credibility.
Ask for help! Don’t get stuck needlessly. Give it a good try, do some research, and if you’re blocked, then ask around. Ask your peers. Ask Twitter. Ask your manager. It’s fine to ask for support, it’s a healthy attitude which pays off. Collaboration is often key.
When pairing, let the least experienced person (if there is one) drive. Otherwise they could get lost, or confused, or stressed out. Have that person set the pace and lead the session with questions and suggestions. Pairing is a collaboration, not a lecture.
In most cases, I’d recommend staying away from coding challenges if you’re involved in hiring devs. They are unhelpful, exclusive and stressful. Design a good conversational technical interview instead, which should yield the same or most likely better results in my experience.
Surround yourself with people who care. Care about others, care about quality, care about safety. “Jerk geniuses” are typically way more jerks and way less geniuses than advertised. People who can collaborate, share and compromise are the people you want to work with.
Tech is not a zero sum game. You don’t have to diminish people’s success to feel good. Their achievements don’t negate yours. You get to be happy and proud of your peers for succeeding, even in places where you might have not. Celebrate their success with them!
Similarly, don’t compare yourself to others. At any point of your career. Some people will be better, faster, or just different. This is not a competition. Your career is your own, not theirs. You design it, shape it and pace it the way you want, that works for you.
A tough but important one: acknowledge your mistakes and shortcomings. It takes guts to admit you were wrong or messed up, but it’s important. It shows humility, it shows professionalism and it builds trust.
Learn to pick your battles. Especially when gaining more seniority, we tend to want to be involved in more topics to gain knowledge, experience and influence. But not everything is worth it. When in doubt, make it a group conversation, share your opinion and move on.
Faking confidence has helped me at times. Sometimes you have to shield less experienced/senior people from the stress or the uncertainty so they can safely focus on the task at hand. Tell them it’ll be fine, and take the tough part away from them.
Following up on this, if you have the authority or position to make someone’s life easier, use it. Lend them your authority if that makes sense. Help them settle conflictual situations. Save them some battles if you can, by just being there and supporting them.
Give constructive feedback when asked or possible, and welcome feedback given to you—be it technical or otherwise. It’s a good way to improve, to grow, and to be better towards others. Contribute to creating a culture of feedback in your organization/community.
If you notice someone being particularly good at something, tell them. Your acknowledgement might mean a lot to them, and be the push they need to keep doing that thing so well. Give praise when due, publicly ideally, otherwise at least in person.
If you can, especially if you’re in a privileged position, participate in shaping a healthy and inclusive environment. Use your position, your privileges, to make people safe and comfortable. Speak up. Amplify the voices of your peers, especially from under-represented groups.
On a similar note, make sure everyone gets a chance to contribute and grow. Learn when to delegate and empower people to take on new responsibilities in a safe way. Build confidence and seniority by trusting them to take ownership of situations and discussions.
Being the “smartest” person in the room is not always desirable. If you happen to be, share your knowledge. Don’t be that person hoarding knowledge and responsibilities to be important. Making myself obsolete at N26 was the best move I’ve pulled in retrospect.
You don’t have to hustle all the time. Take on side projects if you want to, not because you feel like you have to. Resting is productive. Taking time to do something else is healthy. You’re more than your work.
Find yourself a hobby outside of coding maybe? Coding gets old eventually I think. Having another way to spend time that sparks joy is probably a good thing. I like writing, playing with my cats, and playing video games on my phone (or laptop sometimes).
Simple and strong one for last: be nice to people. We all try to do our job and go through the day. People are not the NPCs of your life. Be kind, don’t burn bridges. Unless they push you around, in which case tear them a new one.
Last but not least, kind thanks to Adiya Mohr for helping me come up with some items in this thread. Make this my last advice then: find yourself a friend like Adiya. ✨
🦍 January 4th. I started a new role at Gorillas as an Engineering Lead, after 4+ years at N26.
📦 February 17th. I released version 6 of a11y-dialog without many user-facing changes. It was mainly moving the bundling stage to rollup to be a bit more flexible.
📦 March 23rd. I released version 7 of a11y-dialog, an interesting milestone which significantly reduced complexity. a11y-dialog has subsequently been recommended by Scott O’Hara in his long-standing rebuttal of the native <dialog> element.
🏳️⚧️ May 23rd. I came out as being non-binary trans to my parents, something I was dreading for a long time.
🏳️🌈 June 30th. I spoke at an internal Gorillas event about what it is like to be non-binary in the workplace, and what people and organizations can do to support people like me.
👂 July 3rd. I got my ears pierced! I was a little nervous originally, but I’m super happy with it in the end. I get to wear cute earrings all the time!
📝 July 28th. I finally got to write for Smashing Magazine for the very first time. I authored a nice technical piece on building a dialog library from scratch. I’ve been reading Smashing Magazine for years, and am honored to have finally been able to contribute to that incredible publication.
🪒 July 30th. I started facial hair removal therapy. As of writing, I have gone through 4 sessions already, and am looking at another 3 or 4 before being completely finished. Having facial hair, even a little, has been a big source of dysphoria for me, so I’m super excited to get rid of it for good.
👩💻 August. I took on the position of Group Engineering Lead, which is significantly less hands-on than I’ve been used to—something I’m not necessarily unhappy with. This gave me the opportunity to have a bigger impact on the Product & Engineering organization as a whole, which is something I’m proud of.
👀 September 13th. I got asked to be a reviewer for the Smart Interface Design Patterns course by Vitaly Friedman, the founder of Smashing Magazine. It was a lot of fun to be able to go through the whole content and give suggestions and feedback!
💬 September 25th. I started getting some counseling in order to be able to discuss a variety of topics (difficult or not) with someone impartial and removed from them. It’s been helpful overall.
🔨 November 8th. I submitted my first pull-request to Next.js, aiming to improve the vocal announcement of page changes. While this was pretty minor, it feels nice to have been able to contribute to such a big open-source project.
🌱 November 15th. This day marked 3 years for me without eating meat and two years without fish.
🎂 December. I turned 30. On this occasion, I wrote a 30-tweets long thread of advice on frontend development, in case you are into that sort of thing.
🧩 December 17th. I unfortunately gave up on Advent of Code after a 16-days streak, being unable to solve the puzzle of the day. This killed my motivation to even attempt any day after that, so I’m leaving it at that.
👗 December 26th. I wore a dress and heels outside for the first time, a personal accomplishment I’m glad to be ending the year on!
Same as every year, I don’t set goals because I never follow through. The only thing I need to do is a full health checkup to make sure I’m all good. But that really is about it.
I went through all the replies from the thread as well as some additional sources and tried to write a summary of the various points that surfaced. I thought I’d write a short post I can link to when answering: “is it fine to say ‘disabled’?”
Short answer: yes. Not only can we, but we should. And we should stay away from ableist variants like “differently abled”, “invalid”, “handicapable” or “with special needs” (a mistake I have made in the recent past as well).
Long answer: Over the last few years/decades, there were a lot of discussions about whether or not it is okay to say that someone is “disabled.”
Some (typically non-disabled) people think of “disabled” as a pejorative, a somewhat dirty or shameful word which supposedly stigmatizes people with disabilities. The problem with that narrative is that it often centers non-disabled people’s feelings and their discomfort with disability. One should be able to acknowledge that an individual is unable to perform a certain action without thinking of it in a negative way. It’s not a slur, it’s not shameful, it doesn’t make people lesser.
Using the term “differently abled” is a condescending attempt at suggesting that disabled people do not in fact have disabilities but different abilities. That is not how things work. Unlike the typical representation in media, being disabled doesn’t come with compensatory skills and aptitudes. When someone is disabled, there are things they cannot or struggle to do, period.
Moreover, everyone has varying and different abilities but that doesn’t mean that everyone is disabled. Reducing disability to “varying abilities” erases a lot of disabled people’s experiences.
To summarize, the problem with shying away from the word “disabled” is twofold:
It undermines the severity of the disability, and how significantly it can impact people’s lives. Blindness is not a “different ability.” It’s the absence of ability to see. It’s a disability, and implying otherwise is patronizing and disrespectful.
It risks slowing down support. Disability does not happen in a vacuum: it relates to the environment in which people live. The only way for society to provide support with regard to these difficulties is by acknowledging that some individuals are disabled in the first place. As long as we push the “differently abled” narrative (or other similar labels), accommodation and reform might be slow to come.
On that note, Christiane Link rightfully points out that people are disabled by the environment, by attitudes and structural discrimination. Therefore, disability isn’t so much about deficits. Disabled people often do things differently and are very much used to finding solutions others never had to find. That’s actually a skill.
Ultimately, we shouldn’t try to sugar-coat or glorify disabilities. They exist, and we have to acknowledge them. By and large, the disabled community is comfortable with that word and uses it to identify and describe themselves. If a disabled person tells you they prefer you use another term when mentioning them, of course do so. Until then, using “disabled people” is a good and safe choice of words.
There is a lot of discussion between identity-first (e.g. “disabled person”) versus person-first (e.g. “person with a disability”) language, with no explicit consensus. The first promotes a social model of disability, while the latter reflects a more medical model of disability. Which to use might depend on context and interlocutor, and in doubt, it’s always better to ask. That being said, the existence of both approaches in no way invalidates everything we’ve seen before. Both are significantly better than ableist alternatives.
Kind thanks to Eric Eggert, Ioanna Talasli, Ariel Burone and Christiane Link for their insightful review.
I originally wrote this piece for our documentation at Gorillas in order to advocate for a more inclusive workplace. After having mentioned it on Twitter and been asked for it a few times, I decided to publish it. I hope this helps!
It helps to see gender as some sort of spectrum rather than a binary thing. Most people live at the edges of that spectrum, as men and women, usually on the side that correlates with the sex they were assigned at birth. Many people, however, live somewhere in between. That’s gender identity, the personal sense of one’s own gender. Then there is gender expression, which is how one decides to show their gender identity to the world (via mannerisms, interests, physical appearance…).
Gender and sex are somewhat related, but do not hold a one-to-one equivalency. Both words should not be used interchangeably. The right word should be picked depending on context, and most often it should be “gender”.
People who are not cis—that is, who do not identify with the gender they were assigned at birth—have existed for the longest time. It is not a recent invention or a product of our time. We have a lot of historical evidence that trans people have been part of society, one way or another, for basically ever. It’s important to understand what it means not to perpetuate prejudices against this under-represented group of people.
Generally speaking, we tend to use non-binary as an umbrella term for anyone who’s not a woman or a man (regardless of whether they transitioned). It’s a pretty generic term. Some people use it but others might prefer something more specific to describe who they are. For instance, some people are agender (as in, they have no gender), others are gender-fluid, bigender, non-binary trans…
It might sound like a lot of jargon, and even a little silly sometimes, but we have to remember that words are what we need to make things real and concrete. The rise of such somewhat convoluted terminology is not an attempt to confuse or frustrate people but an effort to try to best understand one’s gender identity.
Let’s briefly talk about pronouns because pronouns are important. They are how many languages convey a sense of gender. English being a great language, it has a neutral pronoun they/them. It is encouraged to use it when referring to someone whose gender is unknown or undescribed.
Some people might prefer others using these pronouns when referring to them to avoid conveying one gender or the other—it’s neutral that way. Kindly respect that. It’s not a whim, it’s how to establish one’s sense of self and being respected for who one is. It matters, even when they’re not around to hear it.
Not all non-binary people use they/them pronouns. Not all men/women use he/she pronouns. It depends on one’s gender expression.
You can help trans and non-binary people by setting your pronouns on Slack (and other personal profiles) even if you are cis (as in, you use the pronouns going with the sex assigned at birth). This normalizes the respect of people’s pronouns and shifts the focus away from trans and non-binary people by avoiding them being the only ones talking about pronouns.
Generally speaking, the advice about gender is not to collect it if you don’t need it, and if you do need it, explicitly mention why, so people know how best to fill it. For instance, being non-binary in a country which does not recognize it as a legal gender identity can be challenging. So if you need the gender as defined on official documents, it’s good to mention that next to the field.
If you collect the gender for internal statistics, and gender gap/bias analysis, then it’s also good to mention it. In such a case, you can be a bit more permissive with the options. Ideally, a free text option is best, but it’s harder to process, so we can provide multiple choices instead.
Minimum effort:
Something a little more fleshed out (and therefore more respectful) would be:
It is better to avoid the words custom, other or X, as all can feel a little alienating to trans people.
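As a sketch of the two option sets mentioned above (all labels, options and wording here are assumptions, not a canonical list), such a field could look like:

```html
<!-- Hypothetical sketch of a gender field; labels and options are assumptions -->
<label for="gender">Gender</label>
<select id="gender" name="gender">
  <option value="">Prefer not to say</option>
  <option value="female">Female</option>
  <option value="male">Male</option>
  <option value="non-binary">Non-binary</option>
  <!-- A more fleshed out version could add, e.g., “Agender” or a free-text
       “Let me type it myself” option instead of “custom” or “other” -->
</select>
<p>Your gender will be restricted to the People Operations team and treated as confidential information.</p>
```

The trailing note about visibility connects to the point below about specifying who has access to that information.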
Another important thing to mention is the availability of such information. Some people might not be comfortable openly disclosing themselves as trans, yet they might want to be recognized as such in some contexts.
For that reason, it is always a good idea to specify who has access to that information in order to avoid inadvertently outing people. Having something as simple as “Your gender will be visible to all employees” or “Your gender will be restricted to People Operations team and will be treated as confidential information” is good, for example.
There is a lot of discussion about the best way to phrase “people who do not identify as cisgender men.” There are a couple of options which range from sub-optimal to downright terrible, and I thought it might be helpful to walk through some of them.
So where does that leave us? Ultimately, it depends a lot on what we’re trying to say of course. Yet, a pretty inclusive expression would be “under-represented genders” or “marginalized genders”, as it covers everyone who’s not a cis man. It doesn’t quite roll off the tongue, but at least it’s all-encompassing and doesn’t exclude or erase any group.
There is also the German acronym FINTA which stands for “Frauen, Intergeschlechtliche, Nicht-binäre, Trans und Agender” and translates into “Female, Intersex, Non-binary, Trans and Agender.” This is a pretty great and inclusive term, although it’s not exactly obvious from the get-go (since it’s an acronym). Nevertheless, it is my preferred term in circles where it can be established and further used.
There are plenty of reasons why you might want to move to Next from a CRA app. It provides server-side rendering (SSR), and even incremental static regeneration (ISR) when hosted on Vercel. It’s an encompassing framework with built-in routing, image optimization, development environment, and more.
This post is a high-level walkthrough of the things to deal with to finalize the migration from CRA to Next.
CRA uses an index.html file in the public folder to configure the HTML document surrounding the app. Next handles everything in React via the _document.js file, so it needs to be moved manually. Fortunately, it’s relatively easy to do, and the Next documentation provides some pointers.
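As a sketch (assuming a fairly standard setup), a custom _document.js mirroring a CRA index.html could look like this—depending on your Next version, this may need to be a class extending Document from next/document instead:

```jsx
// pages/_document.js — a minimal sketch; adjust lang, meta tags, etc. to
// match what your public/index.html declared
import { Html, Head, Main, NextScript } from 'next/document'

export default function Document() {
  return (
    <Html lang="en">
      <Head>
        {/* Move <link> and <meta> tags from public/index.html here.
            Note: the <title> is set per page via next/head, not here. */}
      </Head>
      <body>
        <Main />
        <NextScript />
      </body>
    </Html>
  )
}
```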
For custom head management on a per-page basis, Next comes with its own solution, next/head, while CRA doesn’t. The usual suspects are react-helmet (or its clean version, react-helmet-async) or the more recent hoofd. Either way, I’d recommend abstracting usages of the library to certain components or hooks, so there is only one place to update when switching to Next.
For instance, instead of importing Head from react-helmet in every page, import your own Head component which wraps the react-helmet one. This way, you can update the implementation detail to make it work with Next without having to touch any other component.
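For example (a sketch; the component and prop names are assumptions), the wrapper could look like this while still on CRA:

```jsx
// components/Head.js — hypothetical wrapper around react-helmet
import { Helmet } from 'react-helmet'

const Head = ({ title, description }) => (
  <Helmet>
    <title>{title}</title>
    {description && <meta name="description" content={description} />}
  </Helmet>
)

export default Head
```

When switching to Next, only this file changes—swap Helmet for next/head—and every call site stays untouched.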
CRA does not have built-in routing capability, so is often coupled with react-router-dom. You’d usually have a router component which declares all your routes, and for each route, which component to render. For instance:
import { BrowserRouter, Route, Switch } from 'react-router-dom'
import PostPage from '../PostPage'
import HomePage from '../HomePage'
const Router = () => (
<BrowserRouter>
<Switch>
<Route path='/'>
<HomePage />
</Route>
<Route path='/post/:slug'>
<PostPage />
</Route>
</Switch>
</BrowserRouter>
)
export default Router
Next comes with its own router. More than that: the routing is inferred from the pages folder structure, so there is no router/routes declaration per se. To move that to Next, you would have to create pages/index.js and pages/post/[slug]/index.js, which would look like this:
// pages/index.js
import HomePage from '../components/HomePage'
export default HomePage
// pages/post/[slug]/index.js
import PostPage from '../../../components/PostPage'
export async function getStaticPaths() {
// If you can compute possible paths ahead of time, feel free to, but you
// shouldn’t need to do it to complete the migration to Next.
return { paths: [], fallback: true }
}
export async function getStaticProps(context) {
// If you want to resolve the whole post data from the slug at build time
// instead of runtime, feel free to, but you shouldn’t need to do it to
// complete the migration to Next.
return { props: { slug: context.params.slug } }
}
export default PostPage
Then in the PostPage component, instead of reading the slug from the router with useRouteMatch from react-router-dom, you’d expect it to come from the props. You could handle both ways like this:
import { useRouteMatch } from 'react-router-dom'

// Prefer the slug passed as a prop (Next), fall back to the router (CRA)
const match = useRouteMatch()
const slug = props.slug || match.params.slug
Beyond the route definition itself, I think a healthy way to migrate that part is to abstract away anything about the router into components and hooks, so it’s just a matter of updating these parts when switching over to Next. For instance, have a link component which wraps Link from react-router-dom, so it’s just a matter of updating that component with next/link. Same thing for useRouter and the like.
Note that Next gives some interesting pointers to migrate from react-router.
Here again, Next has a solution for manual code-splitting, next/dynamic, while CRA doesn’t. The industry standard—as far as I can tell—is @loadable/component (also implied by Next docs). Both libraries work basically the same though, so the migration should be a few search-and-replace away:
- import loadable from '@loadable/component'
+ import dynamic from 'next/dynamic'
- const MyComponent = loadable(() => import('./MyComponent'))
+ const MyComponent = dynamic(() => import('./MyComponent'))
CRA has native support for plain CSS. That means you can import a CSS file inside a React component, and CRA will bundle CSS seamlessly. Unfortunately, Next does not, beyond global stylesheets: the only place where Next allows importing stylesheets is in _app.js. So if your codebase uses CSS files all over, you’re in for a painful migration (which is basically what the docs say as well).
The easy-but-dirty way out is to import all your CSS files within _app.js, but that kind of breaks separation of concerns since your components are no longer responsible for their own styles. If you end up deleting a component, you need to remember to delete its imported styles in _app.js. Not great overall.
A better approach would be to do a proper migration. Fortunately, both systems support CSS modules, so one approach might be to manually convert every CSS file to a CSS module. Another approach would be to move the styling layer to a CSS-in-JS solution such as styled-components, Fela, or whatever floats your boat. Either way, that’s going to be a manual migration and a cumbersome one. By far the hardest part.
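As an illustration (the file and class names are made up), converting a component to CSS modules is mostly a file rename plus updating class references:

```diff
- import './Button.css'
+ import styles from './Button.module.css'

- <button className="button button--primary">
+ <button className={styles.buttonPrimary}>
```

Note that selectors containing dashes (like button--primary) need renaming or bracket access, since CSS modules expose class names as object keys.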
Because CRA doesn’t have server-side rendering (SSR) and only uses client-side rendering (CSR), it’s easy to have authored code that won’t work in Next (during pre-rendering). For instance, accessing browser APIs in the render (such as window, localStorage and the like), or initializing state with client-specific info instead of doing so on mount.
For this part, an intimate knowledge of the codebase will help make things SSR-friendly. It should be relatively easy to do, and a good test suite will help spot cases where Next fails to pre-render a page. A more brutalist approach is to run next build and see where it fails.
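For instance, a common guard (a sketch; the helper name and fallback value are assumptions) is to check for window before touching browser APIs, and return a server-safe default otherwise:

```javascript
// Hypothetical helper: safe on the server, reads localStorage on the client
function getStoredTheme() {
  // During SSR/pre-rendering there is no window, so return a fallback
  if (typeof window === 'undefined') return 'light'
  return window.localStorage.getItem('theme') || 'light'
}
```

In components, prefer reading such values in a useEffect (which only runs on the client) and storing them in state, so the server and the client render the same initial markup.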
Both Next and CRA come with integrated linting as part of the development environment and the build step. Unfortunately, the configuration is not quite the same. Fortunately, the CRA linting is a bit more strict than Next, so it shouldn’t be too difficult to migrate. I suspect the other way around to be more complex.
You might want to turn off the @next/next/no-img-element rule though, because it expects every image to be authored with next/image, which a) seems awfully dogmatic and b) is unrealistic for the migration.
{
"extends": "next",
"rules": {
"@next/next/no-img-element": "off"
}
}
One thing I realized only once I was done (insert sad face emoji) is that you could actually run both systems on the same codebase with minimal effort if you cannot one-shot your migration.
CRA uses a single entry point (usually src/index.js), while Next relies on the pages directory, so there is no conflict there. CRA will ignore pages, and Next will ignore the entry file.
If you abstracted into hooks and components everything about routing and head management, you can use an environment variable within said components to use the right libraries. Small proof of concept (not tested, please tread carefully):
import NextLink from 'next/link'
import { Link as RRLink } from 'react-router-dom'
// See: https://nextjs.org/docs/basic-features/environment-variables
// See: https://create-react-app.dev/docs/adding-custom-environment-variables
const FRAMEWORK =
process.env.NEXT_PUBLIC_FRAMEWORK || process.env.REACT_APP_FRAMEWORK
const Link = props => {
return FRAMEWORK === 'next' ? (
<NextLink href={props.to} passHref>
<a>{props.children}</a>
</NextLink>
) : (
<RRLink to={props.to}>{props.children}</RRLink>
)
}
export default Link
This way, you can run REACT_APP_FRAMEWORK=cra react-scripts build and deploy that in production while you slowly migrate your codebase to Next. And you can do staging/beta builds with NEXT_PUBLIC_FRAMEWORK=next next build until you’re happy to put that live.
If you had a custom ESLint configuration for CRA, you might need to make the file a JavaScript file instead of JSON, and use that environment variable in it as well so you can pick the right configuration.
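A sketch of such a JavaScript configuration (the react-app fallback assumes CRA’s eslint-config-react-app; adjust to your actual setup):

```javascript
// .eslintrc.js — pick the right configuration based on the framework in use
const isNext = process.env.NEXT_PUBLIC_FRAMEWORK === 'next'

module.exports = {
  extends: isNext ? 'next' : 'react-app',
  rules: isNext ? { '@next/next/no-img-element': 'off' } : {},
}
```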
I’m not going to lie: this will take time and effort and will not be painless. While both frameworks share a lot of similarities, they are also fundamentally different in the way they approach rendering (which is kind of the benefit of Next), so a lot of things will have to be updated. The most annoying part definitely is the CSS migration, if your CRA app uses plain CSS.
Be sure to read the Next migration from CRA guide as it provides a lot of helpful information on how to move from one system to the other that I haven’t covered in this article.
But with careful planning and incremental work while running both systems on the same code base (one for staging, one for production) until the migration is over, I’d say this is something that’s doable, especially for a team of people. And the results are rewarding, so that’s nice.
Take a basic <details> and <summary> combo:
<details>
<summary>System Requirements</summary>
<p>
Requires a computer running an operating system. The computer must have some
memory and ideally some kind of long-term storage. An input device as well
as some form of output device is recommended.
</p>
</details>
Now, consider the following innocuous CSS:
html {
box-sizing: border-box;
}
* {
box-sizing: inherit;
}
While there is nothing too ground-breaking here, what’s interesting to note is that anything within the <details> element will have a content-box box sizing, and not a border-box one.
Feel free to judge by yourself in this demo.
This is not specific to box-sizing though. There is nothing special about this property that would cause this behaviour. In fact, all enforced inheritance breaks down at the <details> layer, as Šime Vidas pointed out on Twitter.
Amelia Bellamy-Royds was so kind as to explain why that is:
Because of the weird display model of <details>, it is implemented as a shadow DOM (with the summary slotted in first, and then the rest of the light DOM contents). Inherited properties will inherit through the composed tree including shadow elements, which you can’t style.
CSS inheritance should follow [<details> → shadow root → <slot> → <summary>]. But box-sizing isn’t normally inherited, and the * { box-sizing: inherit } rule in the document won’t match either the shadow root node or the slot element.
Amelia then recommended enabling the “Show user agent shadow DOM” Chromium DevTools setting, which enhances the DOM representation with the browser’s shadow DOM. Inspecting our demo, we can now see something like this:
<details>
#shadow-root (user-agent)
<slot name="user-agent-custom-assign-slot" id="details-summary">
<!-- ↪ <summary> reveal -->
</slot>
<slot name="user-agent-default-slot" id="details-content">
<!-- ↪ <p> reveal -->
</slot>
<summary>System Requirements</summary>
<p>
Requires a computer running an operating system. The computer must have some
memory and ideally some kind of long-term storage. An input device as well
as some form of output device is recommended.
</p>
</details>
As Amelia explains, the <summary> is inserted in the first shadow root slot, while the rest of the content (called “light DOM”, or the <p> tag in our case) is inserted in the second slot.
The thing is, none of these slots or the shadow root are matched by the universal selector *, which only matches elements from the light DOM. Therefore, the <details> element properly inherits box-sizing from its parent, but its inner shadow root does not, and neither do the inner slots, hence why the <summary> and the <p> elements don’t.
I played with some ideas to apply the box-sizing rule to shadow roots as well, but I didn’t find anything too conclusive.
What’s particularly interesting is that things work as you’d expect in Firefox though. So either Firefox does not implement <details> with a shadow DOM (which it doesn’t have to, as the implementation is not specified), or it does but makes inheritance work as expected. There is an open whatwg/html issue about this.
I guess a simple fix is to apply box-sizing: border-box to details > * as well, or to apply box-sizing: border-box to everything and bypass inheritance entirely.
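In practice, either of these sketches keeps the demo behaving consistently, since both apply the declaration directly to the light DOM elements instead of relying on inheritance through the shadow tree:

```css
/* Option 1: re-apply border-box to the light DOM children of <details> */
details > * {
  box-sizing: border-box;
}

/* Option 2: bypass inheritance entirely and set it on every element */
* {
  box-sizing: border-box;
}
```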
That’s pretty daunting though. It can be tricky to know how to even get started with such a big task. Fear not! In this piece, we’re going to see how to get started with accessibility so it’s considered all throughout the development lifecycle of the project instead of at the end like usual.
I wrote about what accessibility is as the first piece of my A11y Advent calendar 2020 so I’ll keep things relatively concise here.
The general idea is to provide equal access to content to everyone, regardless of who they are or how they browse the web. So making an interface accessible is about considering anyone, independently of their abilities or disabilities, or the context in which they access content. Practically speaking, we can draw 5 large categories of impairments, each as broad as the next with lots of things to consider: visual, motor, cognitive, auditive and vocal.
So when (re)building that project, you’ll have to remember that the way you use it might very well not be the way many other people use it. Needless to say, it’s good to avoid making too many assumptions and to keep an open mind when building software.
If someone asks you for something more concrete, you can tell them that accessibility is audited through the Web Content Accessibility Guidelines (WCAG for short). The WCAG offer a dozen guidelines organized under the POUR principles, which stands for Perceivable, Operable, Understandable and Robust. Each guideline is testable through success criteria (a total of over 80 of these), each of them with 3 levels of conformance: A, AA and AAA.
Being conformant with the guidelines means passing success criteria at conformance level A. Further efforts can be made to achieve higher conformance levels (when available), but it does not have to be a goal per se. A lot of work can (and should) be put into accessibility well beyond sheer compliance with the WCAG.
Because accessibility testing cannot be fully automated (as we’ll see later), it is important to document expectations and implementations as you go, especially for more complex interface components like interactive widgets.
Authoring an accessibility handbook as a team is a great way to make sure everyone is involved in that initiative, and that the knowledge flows within the product team so it doesn’t fall onto someone’s shoulders to ensure everything’s accessible.
I wrote about how we made technical documentation a first-class citizen back at N26, which contains interesting tips to make sure the documentation remains relevant and up-to-date.
If your team is not very experienced with accessibility, it can be a good idea to create a small checklist people can refer to when they work on a feature. For instance, this is the checklist I authored for Gorillas:
Remember that these are merely suggestions and low-hanging fruits to ensure a certain level of accessibility. Yet, it is important to go further and ensure we are not only compliant with WCAG 2.1 conformance level AA, but also accommodate everyone as best we can.
There is only so much that can be automated. Auditing the HTML and CSS for potential pitfalls is a good place to start, but ultimately it is not going to replace some careful manual testing. Still, it can be a nice safety net, especially in environments where the HTML is abstracted away (such as with JavaScript frameworks).
I would recommend integrating axe which audits the DOM for potential accessibility issues. Its integration in the development environment should be pretty straightforward, and it gives helpful hints in the console while developing features.
If you’re setting up automated tests (and you probably should), you can even integrate axe with Cypress. In a React application, React Testing Library encourages an accessibility-oriented mindset when authoring unit tests.
You can find more information about accessibility testing and tooling in this entry from the A11y Advent Calendar.
Accessibility is an ongoing battle. It’s never done, and you and your team can never stop caring about it. To avoid doing the same improvements and fixes over and over, a component-based architecture is key.
The goal is to create accessible components which can be reused across the application, ensuring that many expectations are matched from the get-go. The same way you’d have a centralized way to deal with, say, translations—you wouldn’t implement a translation pipeline in every component.
Try to rely on existing (lightweight and flexible) implementations of complex components. Interfaces like dialogs, footnotes, tabs, and advanced form controls can be very difficult to build properly, and it’s better to use battle-tested solutions rather than risk rolling your own to the detriment of your users.
I expanded a bit on that topic in the A11y Advent Calendar, and you can also get started with that incredible list of accessible components on Smashing Magazine.
If you’re not sure how to implement something, you could ask on Twitter with the #a11y hashtag, or in the web-a11y Slack. It requires an invite (because Slack), but has thousands of members and is a vibrant community of accessibility enthusiasts and specialists alike, where you can get information and support.
Accessibility is a team game. It cannot be achieved, let alone ensured, by a single individual. It has to come from everyone, all the time. That’s why it’s very important to raise awareness about it within the organization so it becomes a normal topic to talk about.
I would go further and say that accessibility should be discussed (and possibly assessed) during the interview process, especially for engineers and designers, but also QA engineers and product owners. It’s everyone’s responsibility to make it happen, and too often that burden is put solely on developers.
That should get you started. Take the time to build things well from the ground up, it will pay off later. Treat accessibility as what it is, a significant part of the job, instead of a burden or someone else’s responsibility. Make your interface accessible, test, test, test, rinse and repeat. It doesn’t have to be perfect, it just has to be usable by everyone.
Here are the things we want to achieve:
To solve our first point, we’re going to author 2 functions: `getEntry` and `getEntries`. The first one will always return a single entry, while the second one will always return an array of entries.
Our second point is going to be addressed by passing different arguments to our functions, all of which will be combined to construct a GROQ query which will eventually be forwarded to the Sanity client. Both functions have the exact same signature for convenience, which goes like this:
- `conditions` is a required array of individual conditions, which will be joined together with `&&`. This is what lives between `*[` and `]` at the beginning of our GROQ query.
- `fields` is the core of the query. We use a string to preserve the power and flexibility of GROQ—no need to try to serialise this madness.
- `params` is an optional object of arguments referenced in the conditions.
- `options` is an optional object of options such as order, limit and preview.

```js
const getEntry = ({ conditions, fields, params, options }) => {}
const getEntries = ({ conditions, fields, params, options }) => {}

const client = { entry: getEntry, entries: getEntries }
```
Let’s see what it would look like in practice with a small example:
```js
const page = await client.entry({
  conditions: ['_type == "page"', 'slug.current == $slug'],
  params: { slug: 'my-page-slug' },
  fields: `_id, title, "content": body`,
  options: { isPreview: true },
})
```
Our third and final point, returning draft content in preview mode, will be addressed separately further down this blog post.
First, let’s write a small utility to take all our arguments and create a valid GROQ query from it. Let’s call it `createQuery`. It’s going to receive the array of conditions, the string of fields, and the options, and put them all together to return a query.
```js
export const createQuery = ({ conditions, fields = '...', options = {} }) => {
  const slice = typeof options.slice !== 'undefined' ? `[${options.slice}]` : ''
  const order = options.order ? `| order(${options.order})` : ''

  return `*[${conditions.join(' && ')}] { ${fields} } ${order} ${slice}`
}
```
Note that we use a type check for `options.slice` instead of just checking if it’s truthy to make it possible to pass `0` if necessary (which is a falsy value but should still be printed out as a slice).
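To make this concrete, here is the `createQuery` helper from above run against a couple of inputs. The condition and field values are just examples:

```js
// `createQuery` as defined above, exercised with a couple of inputs.
const createQuery = ({ conditions, fields = '...', options = {} }) => {
  const slice = typeof options.slice !== 'undefined' ? `[${options.slice}]` : ''
  const order = options.order ? `| order(${options.order})` : ''

  return `*[${conditions.join(' && ')}] { ${fields} } ${order} ${slice}`
}

const withSlice = createQuery({
  conditions: ['_type == "page"', 'slug.current == $slug'],
  fields: '_id, title',
  options: { slice: 0 },
})
// `withSlice` ends with `[0]`: the `typeof` check keeps the falsy slice.

const bare = createQuery({ conditions: ['_type == "page"'] })
// `bare` falls back to the `...` fields and has no order or slice segment.
```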
Now that we can create a GROQ query, we can use it in our helpers.
```js
const getEntry = ({ conditions, fields, params, options = {} }) => {
  const query = createQuery({
    conditions,
    fields,
    options: { ...options, slice: 0 },
  })

  return client.fetch(query, params)
}

const getEntries = ({ conditions, fields, params, options = {} }) => {
  const query = createQuery({ conditions, fields, options })

  return client.fetch(query, params)
}
```
There is admittedly not too much going on for now. The interesting part is going to deal with draft content and that’s the topic of our next section.
Sanity handles draft content by cloning the entry and prefixing its unique ID with the `drafts.` prefix. From the Sanity documentation:

> Drafts are saved in a document with an id beginning with the path `drafts.`. When you publish a document it is copied from the draft into a document without the `drafts.`-prefix (e.g. `drafts.ca307fc7-4413-42dc-8e38-2ee09ab6fb3d` vs `ca307fc7-4413-42dc-8e38-2ee09ab6fb3d`). When you keep working a new draft is created and kept read protected in the drafts document until you publish again.
What that means for our client is that we want to give precedence to draft content when the `isPreview` option is passed. When querying a single entry, we should return the draft version if there is one. And when querying a collection, we should preserve the drafts over the published counterparts. Consider the following list:
```
5a3b2389-36ce-4997-a93e-2419479d372d
ac547938-3732-4063-aeec-e41e3376d1f3
091b1dda-81dc-45b7-97f4-61b8fc50a3c1
drafts.091b1dda-81dc-45b7-97f4-61b8fc50a3c1
```
If the preview option is passed, we want to return the following entries:
```
5a3b2389-36ce-4997-a93e-2419479d372d
ac547938-3732-4063-aeec-e41e3376d1f3
// This entry is *not* returned because it has a draft counterpart (L5).
// 091b1dda-81dc-45b7-97f4-61b8fc50a3c1
drafts.091b1dda-81dc-45b7-97f4-61b8fc50a3c1
```
If the preview option is not passed, we want to return the following entries:
```
5a3b2389-36ce-4997-a93e-2419479d372d
ac547938-3732-4063-aeec-e41e3376d1f3
091b1dda-81dc-45b7-97f4-61b8fc50a3c1
// This entry is *not* returned because it is a draft.
// drafts.091b1dda-81dc-45b7-97f4-61b8fc50a3c1
```
Returning only published content is very easy thanks to the fact that Sanity does not return draft entries when the `useCdn` option is enabled on the client. So the first thing we can do is define 2 different Sanity clients, one for when the preview is enabled and one for when it’s not.
```js
const client = sanityClient({
  projectId: PROJECT_ID,
  dataset: DATASET,
  useCdn: true,
  apiVersion: API_VERSION,
})

const previewClient = sanityClient({
  projectId: PROJECT_ID,
  dataset: DATASET,
  useCdn: false,
  token: TOKEN,
  apiVersion: API_VERSION,
})
```
The first thing we have to do is pick the correct client based on the preview mode. If we’re not in preview mode, then things are easy since the production client uses the Sanity CDN which doesn’t return drafts. If the preview mode is enabled though, we need to figure out which entries to keep.
Let’s start with the `getEntry` function. When querying the preview client, we do not limit the amount of results to 1. Then, we try to find a draft entry first, and if there is none, we return the published entry.
```js
const isDraftEntry = entry => entry._id.startsWith('drafts.')
const isPublishedEntry = entry => !entry._id.startsWith('drafts.')

const getEntry = async ({ conditions, fields, params, options = {} }) => {
  const slice = options.isPreview ? options.slice : 0
  const query = createQuery({
    conditions,
    fields,
    options: { ...options, slice },
  })

  if (options.isPreview) {
    const entries = await previewClient.fetch(query, params)

    return entries.find(isDraftEntry) || entries.find(isPublishedEntry)
  }

  return client.fetch(query, params)
}
```
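As a quick check of that draft-first logic, here are the two selection helpers run against a hypothetical two-entry preview result (the entry objects are made up for illustration):

```js
// The selection helpers from above, applied to a fabricated preview result.
const isDraftEntry = entry => entry._id.startsWith('drafts.')
const isPublishedEntry = entry => !entry._id.startsWith('drafts.')

const entries = [
  { _id: 'abc123', title: 'Published title' },
  { _id: 'drafts.abc123', title: 'Draft title' },
]

// Draft first, published as a fallback: this is what `getEntry` returns
// in preview mode.
const entry = entries.find(isDraftEntry) || entries.find(isPublishedEntry)
// entry.title → 'Draft title'
```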
The `getEntries` function is a little more complex. We need to preserve drafts over published entries as explained at the beginning of this section.
```js
const getEntries = async ({ conditions, fields, params, options = {} }) => {
  const query = createQuery({ conditions, fields, options })
  const sanityClient = options.isPreview ? previewClient : client
  const entries = await sanityClient.fetch(query, params)

  return options.isPreview ? entries.filter(preserveDrafts) : entries
}
```
And our preserve drafts function (annotated with comments):
```js
const isNotSelf = entry => item => item._id !== entry._id

const findSameEntry = (current, array) => {
  const otherEntries = array.filter(isNotSelf(current))
  const isDraft = isDraftEntry(current)
  const isSameEntry = entry =>
    // If the current entry is a draft, a duplicate would be a published version
    // with the same ID but without the `drafts.` part. If the current entry is
    // a published version, a duplicate would be a draft version with the same
    // ID starting with the `drafts.` part.
    isDraft ? current._id.endsWith(entry._id) : entry._id.endsWith(current._id)

  return otherEntries.find(isSameEntry)
}

// Try to find the current entry in the array with a different publication
// status (draft if it’s published, or published if it’s draft). If the same
// entry has been found in the array but with a different publication status,
// it means it is both published and drafted. In that case, we should only
// preserve the draft version (most recent).
const preserveDrafts = (current, _, array) =>
  findSameEntry(current, array) ? isDraftEntry(current) : true
```
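To double-check the behaviour, we can run `preserveDrafts` over the example list of IDs from earlier (the entry objects are mocked for the occasion):

```js
// The filtering logic from above, exercised against the example IDs.
const isDraftEntry = entry => entry._id.startsWith('drafts.')
const isNotSelf = entry => item => item._id !== entry._id

const findSameEntry = (current, array) => {
  const otherEntries = array.filter(isNotSelf(current))
  const isDraft = isDraftEntry(current)
  const isSameEntry = entry =>
    isDraft ? current._id.endsWith(entry._id) : entry._id.endsWith(current._id)

  return otherEntries.find(isSameEntry)
}

const preserveDrafts = (current, _, array) =>
  findSameEntry(current, array) ? isDraftEntry(current) : true

const entries = [
  { _id: '5a3b2389-36ce-4997-a93e-2419479d372d' },
  { _id: 'ac547938-3732-4063-aeec-e41e3376d1f3' },
  { _id: '091b1dda-81dc-45b7-97f4-61b8fc50a3c1' },
  { _id: 'drafts.091b1dda-81dc-45b7-97f4-61b8fc50a3c1' },
]

const preserved = entries.filter(preserveDrafts).map(entry => entry._id)
// The published '091b1dda…' entry is dropped in favour of its draft;
// the other two entries are kept as-is.
```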
Note that this all requires querying the documents’ `_id` as part of the fields when the preview mode is enabled, since the filtering is done by reading the `_id`. To make sure this is the case, one could add a little check in the `createQuery` function to ensure it’s part of the fields.
That’s it! It’s not the most intuitive, but it works like a charm. When the preview mode is enabled, draft content will be returned and draft entries will take precedence over their published counterparts, which is what we want.
From there, both helpers could also be improved with development logs for debugging purposes, tracking and whatnot. It’s pretty convenient since they centralize the logic to query data, which means it’s a great place to put this sort of thing.
I hope this helps! ✨
```js
import myHelper from '~/helpers/myHelper'
```
Sometimes, another prefix is used in place of `~/`, such as an `@` sign or something more fancy. We used a tilde because it kind of means “home” or “root”, so it felt nice.
In our case, we have some Node scripts we use in our continuous deployment pipelines, which need to import modules from the project. Things like helpers or constants. The problem is that these aliased paths only really work because Webpack processes them. When going through Node only, they don’t exist and the whole thing crashes.
```
internal/modules/cjs/loader.js:883
  throw err;
  ^

Error: Cannot find module '~/helpers/myHelper'
Require stack:
```
I played with a few solutions until I stumbled upon the module-alias npm package, which essentially monkey-patches the Node resolution algorithm to make it understand whatever you like.
So at the top of our scripts (in the `bin` folder in our case), we add the following line:

```js
require('module-alias').addAlias('~', __dirname + '/../src')

const myHelper = require('~/helpers/myHelper').default
```
Job done, now our Node scripts can safely import files from our applications despite them being riddled with aliased paths.
Well, almost. There is still the problem that if your application uses `import` and `export` instead of `require` (which is quite likely when using Webpack), our script might be able to find our application files, but it’s going to choke on any import statement within them.
```
import anotherHelper from '~/helpers/anotherHelper'
^^^^^^

SyntaxError: Cannot use import statement outside a module
```
To make our Node scripts understand and process `import` and `export` statements, we use the esm module loader. It’s pretty transparent too: instead of doing `node script.js`, we do `node -r esm script.js`.
That’s about it. No more duplicating code within our Node scripts just because they cannot import files from the main application. ✨
My coworker Anita Singh and I have been working on some Slack guidelines for Gorillas in order to encourage mindful behaviour and being respectful of everyone’s time in written communication.
A lot of you seemed interested in us publishing them, so here they are. They are not a verbatim copy of our company guidelines since ours are quite Gorillas-specific, and they should be generic enough to be taken in most organizations.
In many organizations, Slack is the primary communication tool, even supplanting email. And in this day and age of global pandemic and ever-growing remote working culture, Slack is often the only way we have to know the people we work with.
This is why it is important to fill your Slack profile.
Slack has some official guidelines for creating channels, which outline helpful suggestions around naming conventions and organization structure.
Before creating a channel however, ask yourself whether you need a new channel at all. While channels are cheap, they can also cause Slack fatigue where there are just too many of them, and keeping track of things becomes cumbersome. It can also lead to people constantly redirecting one another to a more appropriate channel, creating friction.
In tech organizations, we tend to recommend avoiding discipline-specific channels (such as #consumer-facing-ios, #consumer-facing-android, #consumer-facing-web…) and instead having one channel with all the relevant stakeholders. This encourages cross-discipline collaboration and lowers the risk of tribalism and discipline-centric attitudes.
Once you have created a new channel, set its description for the sake of clarity, and open with an explanatory message about the purpose of the channel and why/when people should use it.
Unless there will be ongoing discussions about a unique topic, it might be preferable to use direct messages or group messages.
Generally speaking, default to public when creating a channel. Private channels cannot be turned into public channels down the line, which means information within them will remain forever restricted—this should be intended for specific purposes only.
There certainly are cases where private channels should be used, such as discussing confidential topics (legal, leadership…), but most conversations should hopefully be quite open. This encourages transparency and reduces duplicated communication and information loss.
Picking the right channel, especially where there are so many of them, can be tricky. It might be interesting to have a company-available document with some high-level overview of the Slack organization. For instance, every department or long-lasting team may have their own public channel for people to report issues or suggestions.
If there is a channel with stakeholders regarding a topic, it is best to post there so that everyone is in the loop and has access to the information. Reserve direct- or group-messaging for when the conversation is short-lived and only pertains to you and the person(s) you are messaging.
If you are posting something that will spark a discussion or replying to someone’s message, please start a thread or respond in a thread. This will minimize noise in channels and let people find content more easily.
As a rule of thumb, refrain from using `@channel`, unless absolutely required (like an urgent announcement for everyone). If getting people’s immediate attention is necessary, prefer using `@here` since it will only notify people currently online, which is often enough.
❌ @channel Who can help me with setting up this tool for this afternoon?
✅ @here Who can help me with setting up this tool for this afternoon?
Even with `@here`, please be considerate and remember that it will notify everyone on the spot, which might disturb people, especially people with ADHD (Attention Deficit Hyperactivity Disorder).
❌ @here I started drafting this document for us to keep track of things.
Check it out when you have a chance.
✅ Hey team, I started drafting this document for us to keep track of things.
Check it out when you have a chance.
Additionally, when referring to someone in a message, do not mention them unless they need to be aware of your message. (Thank you to Pedro Duarte for the suggestion!)
❌ I discussed it with @Kitty and we think this is the proper way to go.
✅ I discussed it with Kitty and we think this is the proper way to go.
To help people figure out whether a conversation is relevant to them, it can be interesting to start messages (especially when long and requiring acknowledgement or action) with whom it is for, and what kind of urgency it is.
Similarly, the decision making process and important discussions happening over video calls should be summarized on Slack so everyone has access to the information.
Slack is a fantastic tool and the way we work with one another. Yet, it shouldn’t be the tool for everything, especially considering the other services and software we use.
Here is a non-exhaustive list of things not to use Slack for:
Personally-identifiable information (PII for short) is any sort of information about people (such as customers) which can be used to identify an individual personally (such as a name, an address, a phone number…). Due to GDPR regulation and a general desire to respect people’s privacy, we should refrain from sharing people’s information on Slack, both within messages and in screenshots.
For some reason, I was thinking about it the other day and was wondering how quickly I could recreate it almost a decade later, without reading the original article. Well, something like 10 minutes, and I managed to remove 3 HTML elements. 💪
See the Pen Single element CSS pie timer by Kitty Giraudel (@KittyGiraudel) on CodePen.
In my original approach, I needed a container and 3 child elements. In this version, I managed to sort it out with a single empty HTML element. I used a `<div>` here.

```html
<div class="pie"></div>
```
If you intend to use this as a loading state of some sort, please remember that it doesn’t convey any meaning to assistive technologies as is and should be accompanied with actual text content that can be read. Also, SVG might be a better choice for such an animation.
Here is how I thought about making it work: we use 2 pseudo-elements. One will serve as a mask, and the other will be the rotating one.
Let’s start with our container.
```css
/**
 * 1. Size the pie as a 1em-wide disc, and use `font-size` to scale it
 *    up or down. This is not the only way, and it could be sized manually
 *    if deemed preferable.
 * 2. Give a position context for the absolutely-positioned pseudo-
 *    elements.
 * 3. Give it a border so it can be visible despite being empty.
 * 4. Originally used `color` to be able to use `currentcolor`, but
 *    Safari doesn’t like `currentcolor` in an animation. 🤯
 */
.pie {
  font-size: 500%; /* 1 */
  width: 1em; /* 1 */
  height: 1em; /* 1 */
  border-radius: 50%; /* 1 */
  position: relative; /* 2 */
  border: 0.05em solid var(--color); /* 3 */
  --color: deeppink; /* 4 */
}
```
Then, the base styles for our pseudo-elements:
```css
/**
 * 1. Shape both pseudo-elements as half-circles. Hiding overflow on
 *    the container and skipping border-radius on the pseudo-elements
 *    unfortunately produces glitchy results in Safari.
 * 2. Place them both on the left side of the pie.
 * 3. Make them spin from the center right point, not the middle.
 */
.pie::before,
.pie::after {
  content: '';
  width: 50%; /* 1 */
  height: 100%; /* 1 */
  border-radius: 0.5em 0 0 0.5em; /* 1 */
  position: absolute; /* 2 */
  left: 0; /* 2 */
  transform-origin: center right; /* 3 */
}
```
```css
/**
 * 1. Put the masking pseudo-element on top.
 */
.pie::before {
  z-index: 1; /* 1 */
  background-color: white; /* 1 */
}

/**
 * 1. Give the spinning pseudo-element the pie color.
 */
.pie::after {
  background-color: var(--color); /* 1 */
}
```
Finally, the animations:
```css
/**
 * 1. Shared animation properties for both pseudo-elements.
 */
.pie::before,
.pie::after {
  animation-duration: 3s; /* 1 */
  animation-iteration-count: infinite; /* 1 */
}

/**
 * 1. We want the animation to have a single step halfway through.
 */
.pie::before {
  animation-name: mask;
  animation-timing-function: steps(1); /* 1 */
}

/**
 * 1. Make sure the rotation is linear for the effect to work.
 */
.pie::after {
  animation-name: rotate;
  animation-timing-function: linear; /* 1 */
}

@keyframes mask {
  50%,
  100% {
    background-color: var(--color);
    transform: rotate(0.5turn);
  }
}

@keyframes rotate {
  to {
    transform: rotate(1turn);
  }
}
```
That’s it! Definitely simpler than the original approach, with fewer HTML elements, less CSS, more flexibility and a cleaner output.
For some reason, I recently came back to Sass Guidelines. Not to update the content, but to work on the site itself. It turns out I learnt a lot in the last few years and found many improvements worth doing. I thought it would be interesting to discuss them in this post. Here are the different topics we’ll go through:
I think what I like the most about Sass Guidelines as a project is how I got to collaborate with many people to have it translated into 13 different languages. On that note, Sass Guidelines are now available in Dutch thanks to Noah van der Veer!
If you are interested in translating Sass Guidelines in a language that is currently not supported, please feel free to get in touch on Twitter so we can discuss feasibility! We are also looking for people to update the Polish version (from v1.2) and the Czech and Danish versions (from v1.1).
On any version but the English one, there is an English banner mentioning that this is a translation and therefore might not be 100% accurate. It says (for instance, for the German version):
You are viewing the German translation from Moritz Kröger of the original Sass Guidelines from Kitty Giraudel.
This version is exclusively maintained by contributors without the review of the main author, therefore might not be completely authentic.
I noticed that this disclaimer was not marked as English, which meant someone using a screen-reader wouldn’t switch to English when reading out this content. Not great! I added `lang="en"` to this container and initiated the process to have this content translated since there is no reason it should be displayed in English at all.
Something I learnt from working on the international site for Gorillas is that it can be interesting to list alternate versions in the `<head>` of the document for search engines.
```html
<link rel="alternate" href="https://sass-guidelin.es" hreflang="x-default" />
<link rel="alternate" href="https://sass-guidelin.es" hreflang="en" />
<link rel="alternate" href="https://sass-guidelin.es/cz" hreflang="cz" />
<link rel="alternate" href="https://sass-guidelin.es/da" hreflang="da" />
<link rel="alternate" href="https://sass-guidelin.es/de" hreflang="de" />
<link rel="alternate" href="https://sass-guidelin.es/el" hreflang="el" />
<link rel="alternate" href="https://sass-guidelin.es/es" hreflang="es" />
<link rel="alternate" href="https://sass-guidelin.es/fr" hreflang="fr" />
<link rel="alternate" href="https://sass-guidelin.es/it" hreflang="it" />
<link rel="alternate" href="https://sass-guidelin.es/ko" hreflang="ko" />
<link rel="alternate" href="https://sass-guidelin.es/nl" hreflang="nl" />
<link rel="alternate" href="https://sass-guidelin.es/pl" hreflang="pl" />
<link rel="alternate" href="https://sass-guidelin.es/pt" hreflang="pt" />
<link rel="alternate" href="https://sass-guidelin.es/ru" hreflang="ru" />
<link rel="alternate" href="https://sass-guidelin.es/zh" hreflang="zh" />
```
I therefore added a robots.txt and a sitemap.xml so search engines can properly browse and index the site and all its pages.
I’ve also fixed a lot of links yielding a 404 due to pages and sites having disappeared over the years. I don’t know how much this counts for SEO purposes, but that can’t hurt anyway, at least from the user experience standpoint.
Having spent the last few years focusing on accessibility, I must say I was almost pleased finding accessibility issues on Sass Guidelines as it means I’ve learnt and gotten better.
First of all, titles were a little all over the place.

- There were multiple `<h1>` elements. I think this is a bit of a side-effect of the way the content lives in the codebase, every chapter being in its own Markdown file starting with a top-level title (e.g. `# Title`).
- Some content was marked up as an `<h2>` element despite not being a title at all.
- Some sections jumped straight to `<h6>` elements without the intermediary levels. This comes from the first version of the guidelines where we used that in some instances, when it shouldn’t have been titles at all.

This has all been fixed, and the document outline should be clean and consistent now.
While icons were technically accessible to assistive technologies, I think (I must admit I cannot remember for sure) they caused double vocalisation of the content. I’ve also found an odd bug where they were incorrectly described.
Basically, they all had their own description (with `role="img"` + `aria-labelledby="…"`), but since they are all used within a link/button alongside additional content, the description ended up being read out twice—once for the icon, and once for the text.

Because they are never used on their own and are always displayed alongside textual content (whether visible or not), they can in fact be safely ignored (with `aria-hidden="true"` + `focusable="false"`).
For some reason, the `highlight` block implementation from Jekyll (the static-site generator Sass Guidelines is built on) uses the `<figure>` element for code blocks. That’s definitely a questionable choice, so I moved all code blocks to Markdown fenced blocks (wrapped with triple backticks on both sides) so it no longer uses that HTML element.
At the bottom of the page, there is a recap of all the guidelines in the form of a few bullet-point lists. It’s a good way to have a digestible summary without having to go through all the content.
At the end of every item, there was an anchor link to go back to the relevant section of the document. Unfortunately, these links all had “↩” for content. That’s handy, but definitely not great for assistive technologies as we ended up with dozens of links indiscernible from one another for having all the same content. Since there was no obvious fix without involving all translators, I decided to remove these links.
For something as simple as Sass Guidelines, you’d think there are not many performance improvements that can be done. After all, it’s basically a very long HTML document. Still, I found quite a few cool things to do:
I removed the custom font entirely. We were using Roboto, and while it was responsibly loaded (asynchronously and only a subset, following the Filament Group’s recommendations), it also feels very unnecessary. The site doesn’t become suddenly better because of Roboto (or any font for that matter), so I decided to drop it entirely and use the default font stack instead.
Images also could use some love. First of all, I lazy-loaded them all with `loading="lazy"`, which is pretty interesting to avoid downloading them as soon as the page loads and wait for them to be rendered instead. Secondly, I realised they were not served in optimised formats when available, so I added WebP and AVIF support to significantly reduce their file size.
Not that it makes a huge difference performance-wise, but I removed CSS vendor prefixes. I was surprised to see that I used a lot of vendor prefixes like `-webkit-`, `-moz-` and `-ms-` throughout the stylesheet, which is definitely no longer necessary for most declarations.
Finally, I removed Google Analytics. Mostly because I couldn’t care less about the stats, and also because my Netlify plan includes analytics done on the server-side, so it’s better for everyone.
Sass Guidelines is not a complex project, but there is still quite a lot going on all things considered. I didn’t feel like completely revamping it, but I did clean up a few things regarding tooling:

- I moved the data previously stored in `<meta>` tags for runtime usage in JavaScript to `<template>` elements, as they should be.

I think that’s about it! I find it interesting coming back to a project like this after a few years. I put a lot of work into Sass Guidelines back then (without even considering the time spent authoring the content) and it’s genuinely rewarding looking back and seeing how far I’ve come. ✨
In this article, I will show a small HTML + CSS only implementation of an accessible toggle that you can basically copy in your own projects and tweak at your own convenience.
See the Pen xxgrPvg by Kitty Giraudel (@KittyGiraudel) on CodePen.
Disclaimer: Before using a toggle switch, consider whether this is the best user interface for the situation. Toggles can be visually confusing and in some cases, a button might be more suited.
As always, let’s start with the HTML. In this case, we are going to start with the very basics, which is a properly labelled checkbox. It’s an `<input>` with a `<label>`, with the correct attributes, and a visible label.
If the toggle causes an immediate action (such as switching a theme) and therefore relies on JavaScript, it should use a `<button>` instead. Refer to the button variant for more information about the markup—the styles are essentially the same. Thanks to Adrian Roselli for pointing this out!
```html
<label class="Toggle" for="toggle">
  <input type="checkbox" name="toggle" id="toggle" class="Toggle__input" />
  This is the label
</label>
```
It is worth mentioning that this is not the only way to mark up such an interface component. For instance, it is possible to use 2 radio inputs instead. Sara Soueidan goes into more detail about designing and building toggle switches.
Now, we are going to need a little more than this. To avoid conveying the status of the checkbox through color alone (WCAG Success Criterion 1.4.1: Use of Color), we are going to use a couple of icons.
The way it’s going to work is we’re going to have a small container between the input and the text label which contains 2 icons: a checkmark and a cross (taken from Material UI icons). Then we’ll create the toggle handle with a pseudo-element to cover one of the icons at a time.
<label class="Toggle" for="toggle">
<input type="checkbox" name="toggle" id="toggle" class="Toggle__input" />
<span class="Toggle__display" hidden>
<svg
aria-hidden="true"
focusable="false"
class="Toggle__icon Toggle__icon--checkmark"
width="18"
height="14"
viewBox="0 0 18 14"
fill="none"
xmlns="http://www.w3.org/2000/svg"
>
<path
d="M6.08471 10.6237L2.29164 6.83059L1 8.11313L6.08471 13.1978L17 2.28255L15.7175 1L6.08471 10.6237Z"
fill="currentcolor"
stroke="currentcolor"
/>
</svg>
<svg
aria-hidden="true"
focusable="false"
class="Toggle__icon Toggle__icon--cross"
width="13"
height="13"
viewBox="0 0 13 13"
fill="none"
xmlns="http://www.w3.org/2000/svg"
>
<path
d="M11.167 0L6.5 4.667L1.833 0L0 1.833L4.667 6.5L0 11.167L1.833 13L6.5 8.333L11.167 13L13 11.167L8.333 6.5L13 1.833L11.167 0Z"
fill="currentcolor"
/>
</svg>
</span>
This is the label
</label>
A few things to note about our markup here:
- aria-hidden="true" on our SVGs, because they should not be discoverable by assistive technologies since they are strictly decorative.
- focusable="false" on our SVGs as well, to avoid an issue with Internet Explorer where SVGs are focusable by default.
- hidden on the .Toggle__display container to hide it when CSS is not available, since it should fall back to a basic checkbox. Its display value will be overridden in CSS.

Before we get deep into styling, I would like to clarify the terminology, just so it’s easier to follow:
- The container: the <label> that contains both the toggle and the text label (.Toggle).
- The toggle: the visual switch that displays the two icons (.Toggle__display).
- The handle: the round disc that slides over the icons (.Toggle__display::before).
- The input: the <input> which is visually hidden but remains accessible and focusable (.Toggle__input).

Let’s start with some basic styles for our container.
/**
* 1. Vertically center the toggle and the label. `flex` could be used if a
* block-level display is preferred.
* 2. Make sure the toggle remains clean and functional even if the label is
* too wide to fit on one line. Thanks @jouni_kantola for the heads up!
* 3. Grant a position context for the visually hidden and absolutely
* positioned input.
* 4. Provide spacing between the toggle and the text regardless of layout
* direction. If browser support is considered insufficient, use
* a right margin on `.Toggle__display` in LTR, and left margin in RTL.
* See: https://caniuse.com/flexbox-gap
*/
.Toggle {
display: inline-flex; /* 1 */
align-items: center; /* 1 */
flex-wrap: wrap; /* 2 */
position: relative; /* 3 */
gap: 1ch; /* 4 */
}
Then, our toggle. To make it easier to tweak its styles, we rely on some CSS custom properties for the offset around the handle, and the diameter of the handle itself.
/**
* 1. Vertically center the icons and space them evenly in the available
* horizontal space, essentially giving something like: [ ✔ ✗ ]
* 2. Size the display according to the size of the handle. `box-sizing`
* could use `border-box` but the border would have to be considered
* in the `width` computation. Either way works.
* 3. For the toggle to be visible in Windows High-Contrast Mode, we apply a
* thin semi-transparent (or fully transparent) border.
* Kind thanks to Adrian Roselli for the tip:
* https://twitter.com/aardrian/status/1379786724222631938?s=20
* 4. Grant a position context for the pseudo-element making the handle.
* 5. Give a pill-like shape with rounded corners, regardless of the size.
* 6. The default state is considered unchecked, hence why this pale red is
* used as a background color.
*/
.Toggle__display {
--offset: 0.25em;
--diameter: 1.8em;
display: inline-flex; /* 1 */
align-items: center; /* 1 */
justify-content: space-around; /* 1 */
width: calc(var(--diameter) * 2 + var(--offset) * 2); /* 2 */
height: calc(var(--diameter) + var(--offset) * 2); /* 2 */
box-sizing: content-box; /* 2 */
border: 0.1em solid rgb(0 0 0 / 0.2); /* 3 */
position: relative; /* 4 */
border-radius: 100vw; /* 5 */
background-color: #fbe4e2; /* 6 */
transition: 250ms;
cursor: pointer;
}
/**
* 1. Size the round handle according to the diameter custom property.
* 2. For the handle to be visible in Windows High-Contrast Mode, we apply a
* thin semi-transparent (or fully transparent) border.
* Kind thanks to Adrian Roselli for the tip:
* https://twitter.com/aardrian/status/1379786724222631938?s=20
* 3. Absolutely position the handle on top of the icons, vertically centered
* within the container and offset by the spacing amount on the left.
* 4. Give the handle a solid background to hide the icon underneath. This
* could be dark in a dark mode theme, as long as it’s solid.
*/
.Toggle__display::before {
content: '';
width: var(--diameter); /* 1 */
height: var(--diameter); /* 1 */
border-radius: 50%; /* 1 */
box-sizing: border-box; /* 2 */
border: 0.1em solid rgb(0 0 0 / 0.2); /* 2 */
position: absolute; /* 3 */
z-index: 2; /* 3 */
top: 50%; /* 3 */
left: var(--offset); /* 3 */
transform: translate(0, -50%); /* 3 */
background-color: #fff; /* 4 */
transition: inherit;
}
The transition here is so the handle gently slides from one side to the other. This might be a little distracting or unsettling for some people, so it’s advised to disable this transition when reduced motion is preferred. This could be done with the following snippet:
@media (prefers-reduced-motion: reduce) {
.Toggle__display {
transition-duration: 0ms;
}
}
Let’s not forget to visually hide our actual checkbox, so it sits on top of our toggle and can be clicked, but isn’t actually visible.
.Toggle__input {
position: absolute;
opacity: 0;
width: 100%;
height: 100%;
}
The reason we inserted our toggle container after the input itself is so we can use the adjacent sibling combinator (+
) to style the toggle depending on the state of the input (checked, focused, disabled…).
First, let’s deal with focus styles. As long as they’re noticeable, they can be as custom as we want them to be. In order to be quite neutral, I decided to display the native focus outline around the toggle when the input is focused.
/**
* 1. When the input is focused, provide the display the default outline
* styles from the browser to mimic a native control. This can be
* customised to have a custom focus outline.
*/
.Toggle__input:focus + .Toggle__display {
outline: 1px dotted #212121; /* 1 */
outline: 1px auto -webkit-focus-ring-color; /* 1 */
}
One interesting thing I’ve noticed is that when clicking a native checkbox or its label, the focus outline does not appear. It only does so when focusing the checkbox with a keyboard. We can mimic this behaviour by removing the styles we just applied when the :focus-visible
selector doesn’t match.
/**
* 1. When the toggle is interacted with with a mouse click (and therefore
* the focus does not have to be ‘visible’ as per browsers heuristics),
* remove the focus outline. This is the native checkbox’s behaviour where
* the focus is not visible when clicking it.
*/
.Toggle__input:focus:not(:focus-visible) + .Toggle__display {
outline: 0; /* 1 */
}
Then, we have to deal with the checked state. There are 2 things we want to do in that case: update the toggle background color from red to green, and slide the handle to the right so it covers the cross and shows the checkmark (100% of its own width).
/**
* 1. When the input is checked, change the display background color to a
* pale green instead.
*/
.Toggle__input:checked + .Toggle__display {
background-color: #e3f5eb; /* 1 */
}
/**
* 1. When the input is checked, slide the handle to the right so it covers
* the cross icon instead of the checkmark one.
*/
.Toggle__input:checked + .Toggle__display::before {
transform: translate(100%, -50%); /* 1 */
}
Adrian Roselli rightfully pointed out that this design does not account for a possible “mixed” (or “indeterminate”) state. This is left out for the sake of simplicity, since most checkboxes/toggles do not need such a state, but it should be considered when needed.
Finally, we can add some custom styles to make a disabled toggle a bit more explicit.
/**
* 1. When the input is disabled, tweak the toggle styles so it looks dimmed
* with less sharp colors, softer opacity and a relevant cursor.
*/
.Toggle__input:disabled + .Toggle__display {
opacity: 0.6; /* 1 */
filter: grayscale(40%); /* 1 */
cursor: not-allowed; /* 1 */
}
I originally forgot about right-to-left support and Adrian Roselli was kind enough to poke me so I updated the code. Ideally, we would use the :dir() pseudo-class, but unfortunately browser support is pretty abysmal as of writing, so we have to rely on the [dir] attribute selector instead.

We need to adjust everything that’s currently directional: the original position of the handle, and the checked position of the handle.
/**
* 1. Flip the original position of the unchecked toggle in RTL.
*/
[dir='rtl'] .Toggle__display::before {
left: auto; /* 1 */
right: var(--offset); /* 1 */
}
/**
* 1. Move the handle in the correct direction in RTL.
*/
[dir='rtl'] .Toggle__input:checked + .Toggle__display::before {
transform: translate(-100%, -50%); /* 1 */
}
Finally, we apply some styles to our icons, as recommended by Florens Verschelde in their fantastic guide on SVG icons:
.Toggle__icon {
display: inline-block;
width: 1em;
height: 1em;
color: inherit;
fill: currentcolor;
vertical-align: middle;
}
/**
* 1. The cross looks visually bigger than the checkmark so we adjust its
* size. This might not be needed depending on the icons.
*/
.Toggle__icon--cross {
color: #e74c3c;
font-size: 85%; /* 1 */
}
.Toggle__icon--checkmark {
color: #1fb978;
}
As mentioned previously, using a checkbox is not necessarily the most appropriate markup. If the toggle has an immediate effect (and therefore relies on JavaScript), and provided it cannot have an indeterminate state, then it should be a <button>
element with the aria-pressed
attribute instead.
Adrian Roselli has an insightful decision tree to pick between a checkbox and a button in his piece about toggles.
Fortunately, it is easy to adapt our code so it works all the same as a button. First, we tweak the HTML so the <label>
becomes a <button>
, and the <input>
is removed.
<button class="Toggle" type="button" aria-pressed="false">
<span class="Toggle__display" hidden>
<!-- The toggle does not change at all -->
</span>
This is the label
</button>
Then, we need to make sure our <button>
does not look like one. To do so, we reset the default button styles, including the focus outline since it is applied on the toggle instead.
/**
* 1. Reset default <button> styles.
*/
button.Toggle {
border: 0; /* 1 */
padding: 0; /* 1 */
background: transparent; /* 1 */
font: inherit; /* 1 */
}
/**
* 1. The focus styles are applied on the toggle instead of the container, so
* the default focus outline can be safely removed.
*/
.Toggle:focus {
outline: 0; /* 1 */
}
Then, we need to complement all our input-related selectors with a variation for the button variant.
+ .Toggle:focus .Toggle__display,
.Toggle__input:focus + .Toggle__display {
/* … */
}
+ .Toggle:focus:not(:focus-visible) .Toggle__display,
.Toggle__input:focus:not(:focus-visible) + .Toggle__display {
/* … */
}
+ .Toggle[aria-pressed="true"] .Toggle__display::before,
.Toggle__input:checked + .Toggle__display::before {
/* … */
}
+ .Toggle[disabled] .Toggle__display,
.Toggle__input:disabled + .Toggle__display {
/* … */
}
+ [dir="rtl"] .Toggle[aria-pressed="true"] .Toggle__display::before,
[dir="rtl"] .Toggle__input:checked + .Toggle__display::before {
/* … */
}
That’s about it! This way, we can use either the checkbox markup or the button markup, depending on what’s more appropriate for the situation, and have the same styles in both cases. Pretty handy!
As you can see, there is nothing extremely difficult with it but still a lot of things to consider. Here is what we’ve accomplished:
Pretty neat! Feel free to play with the code on CodePen and I hope this helps y’all make your toggles accessible. ✨ I recommend reading these articles to go further:
If I’m being honest, it wasn’t such a trivial piece of interface, so I want to go through how we built it—hopefully it helps others to make their geolocation widget clean and accessible.
In principle, this is not too complex. When interacting with the unique button, we ask for the geolocation permission, retrieve the user’s coordinates, query our API with them, and display whether they fall within our delivery areas.
While that sounds relatively straightforward, there are a lot of things that can go wrong here:
So we need to think about all this while building our little widget.
Because there are a lot of things to consider, the code is going to be large and complex. We wanted to make sure things remain approachable, especially if we have to maintain it further down the line. To do so, we extracted the geolocation logic into a hook (useGeolocation
), and every single state into its own visual component.
const GeoCheck = () => {
const isMounted = useIsMounted()
const [isPristine, setIsPristine] = React.useState(true)
const { permission, isEligible, hasErrored } = useGeolocation(isPristine)
if (!isMounted) return null
if (isPristine) return <GeoCheck.Pristine setIsPristine={setIsPristine} />
if (hasErrored) return <GeoCheck.Error />
if (isEligible) return <GeoCheck.Success />
if (isEligible === false) return <GeoCheck.Sad />
if (permission === 'denied') return <GeoCheck.Denied />
return <GeoCheck.Waiting permission={permission} />
}
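The branching above can be expressed as a pure state-picking function, which makes the priority order (pristine first, then errors, then eligibility, then permission) easy to unit test. This is an illustrative sketch with a hypothetical name, not the actual component code:

```javascript
// Hypothetical extraction of the rendering decision from <GeoCheck />.
// The order of checks matters: pristine wins over everything, then
// errors, then eligibility, then permission.
const pickGeoCheckState = ({ isPristine, hasErrored, isEligible, permission }) => {
  if (isPristine) return 'pristine'
  if (hasErrored) return 'error'
  if (isEligible) return 'success'
  if (isEligible === false) return 'sad'
  if (permission === 'denied') return 'denied'
  return 'waiting'
}
```

Note that `isEligible` starts as `null`, which is why the `=== false` check is needed: a pending eligibility must not render the “sad” state.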
Let’s break down what the component does in order:
When interacting with the initial button, we set the isPristine boolean to false, and the useGeolocation hook will react to this state change.

There is not a whole lot going on in HTML, but still a few things worth pointing out. When clicking the button, it gets replaced with the loading state. For that reason, we need to move the focus to the container (hence the negative tabindex), otherwise the focus gets lost entirely and a keyboard user would have to tab all the way back to the widget.
<div tabindex="-1">
<p>Please wait, we are checking if we can deliver to you.</p>
</div>
We also mark the widget as loading via aria-busy during waiting times. When supported, this can lead to assistive technologies waiting for aria-busy to be false before vocalizing the new content. In his multi-function button article, Adrian Roselli explains how setting aria-busy removes the element from the accessibility tree, therefore losing focus, so this is a bad idea.
The geolocation part was definitely the most tricky thing to do properly. We need our useGeolocation
hook to return 3 things:
Our hook accepts the isPristine
state from earlier, which is true
if the button has not been interacted with. When that state changes to false
, the hook starts doing its magic.
export const useGeolocation = isPristine => {
const [permission, setPermission] = useGeolocationPermission()
const [hasErrored, setHasErrored] = React.useState(false)
const [isEligible, setIsEligible] = React.useState(null)
React.useEffect(() => {
// Do the magic
}, [isPristine])
return { permission, hasErrored, isEligible }
}
It might be that we already have permission for the geolocation API, and that’s something we can check silently via the permission API. To avoid bloating our hook with more logic, we extracted the permission state into its own hook (useGeolocationPermission
).
If the permissions API is supported, we ask for the state of the geolocation permission, and store the result in our state. We also listen for any change on that permission to synchronize our state. If the permissions API is not supported however, then we have to assume we need to ask for the geolocation permission.
const useGeolocationPermission = () => {
const [permission, setPermission] = React.useState()
React.useEffect(() => {
if ('permissions' in navigator) {
navigator.permissions.query({ name: 'geolocation' }).then(result => {
setPermission(result.state)
result.onchange = () => setPermission(result.state)
})
} else setPermission('prompt')
}, [])
return [permission, setPermission]
}
The last piece of the puzzle is, well, the entire series of events in our main useEffect
. Let’s have a look at it first, then break it down to understand it better.
export const useGeolocation = isPristine => {
const [permission, setPermission] = useGeolocationPermission()
const [hasErrored, setHasErrored] = React.useState(false)
const [isEligible, setIsEligible] = React.useState(null)
React.useEffect(() => {
if (isPristine || permission === 'denied') return
getCoords()
.then(coords => {
setPermission('granted')
return coords
})
.then(getEligibility)
.then(setIsEligible)
.catch(error => {
if (error.code === 1) setPermission('denied')
else setHasErrored(true)
})
}, [isPristine, permission, setPermission])
return { permission, hasErrored, isEligible }
}
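The catch branch can be read as a tiny classifier. Here it is as a standalone helper (hypothetical name and return values, not part of the actual hook); code 1 is PERMISSION_DENIED in the Geolocation API:

```javascript
// Mirrors the catch branch above: a GeolocationPositionError with
// code 1 (PERMISSION_DENIED) should update the permission state,
// anything else (timeout, HTTP failure…) flips the error state.
const classifyGeolocationError = error =>
  error.code === 1 ? 'permission-denied' : 'errored'
```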
Alright, so we ask for permission if the button has been interacted with (as in, not pristine) and we know the permission is not outright denied. From there, we are going to do a few things:

- Get the user’s coordinates (via the getCoords function, shared below).
- Set the permission to granted. This is for browsers that do not support the permissions API but do support geolocation, such as Safari.
- Check the eligibility (via the getEligibility function shared below) and expect a boolean result.
- If an error is caught with a code property set to 1, it means it’s a permission error from the geolocation API, and we should update our permission state to reflect it. Otherwise, it’s most likely a HTTP or uncaught error, and we turn on our error state.

Let’s have a look at our two utilities. First getCoords, which is a thin wrapper around the geolocation API in order to “promisify” it. We also pass our options to it:
export const getCoords = () =>
new Promise((resolve, reject) => {
const getCoords = response => resolve(response.coords)
const options = {
timeout: 10000,
enableHighAccuracy: true,
maximumAge: 1000 * 60 * 5,
}
navigator.geolocation.getCurrentPosition(getCoords, reject, options)
})
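To see the promisification pattern in isolation, the same shape can be exercised against a stubbed geolocation object. Everything below (the parameterized wrapper and the stub) is a test double for illustration, not the real API:

```javascript
// Same wrapper shape as getCoords above, but taking the geolocation
// object as a parameter so it can be stubbed outside a browser.
const getCoordsFrom = geolocation =>
  new Promise((resolve, reject) =>
    geolocation.getCurrentPosition(response => resolve(response.coords), reject)
  )

// Test double mimicking a successful navigator.geolocation call.
const fakeGeolocation = {
  getCurrentPosition: onSuccess =>
    onSuccess({ coords: { latitude: 52.52, longitude: 13.405 } }),
}
```

Calling `getCoordsFrom(fakeGeolocation)` resolves with the stubbed coordinates, while a stub that calls the second argument would reject the promise, which is exactly what feeds the catch branch in the hook.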
Finally, our getEligibility
function does barely more than an HTTP request to our API:
export async function getEligibility({ latitude, longitude }) {
const query = `?lat=${latitude}&lng=${longitude}`
const response = await window.fetch(`/api/delivery_areas${query}`)
const data = await response.json()
return data?.served ?? false
}
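One small hardening worth considering: building the query with URLSearchParams instead of string concatenation, so values get URL-encoded for free. The helper name below is ours, not from the original code:

```javascript
// Hypothetical variant of the query-building line in getEligibility,
// using URLSearchParams to serialize and encode the coordinates.
const buildEligibilityQuery = ({ latitude, longitude }) =>
  '?' + new URLSearchParams({ lat: latitude, lng: longitude }).toString()
```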
That was quite a ride, but we’re pretty pleased with the result. It looks really nice, it works well (at least as far as we can tell 😅) and it helps our visitors figure out whether they can benefit from our lightning-fast groceries delivery!
For good or for bad, we decided not to translate the word “rider” (as in, a delivery person delivering goods on a bike) in French. There are a few ways to translate it, such as “livreur·se” or “coursier·ère”, but we decided to land on the Anglicism “rider”, which (hopefully) is understandable enough.
Now, our top call-to-action on the home page states “Become a rider” in English. Once translated, it says “Devenir rider”. The problem is that “rider” means something in French, and it becomes “wrinkles.” That means the CTA essentially is pronounced as “Become wrinkled” by French screen-readers. Uh-oh.
We use POEditor to manage our translations. It’s a service making it possible for us to map translation keys to localised content. For security reasons, we do not allow translations to contain HTML. That means we needed to implement a fix in the frontend.
We had to be a little creative with the implementation. It’s not the cleanest, but it does the job relatively well. The main idea is that when translating a key into French, we check if the translation contains the word “rider” (or “riders”), and replace it with a span
with the lang
attribute set to en
.
<html lang="fr">
<body>
<a href="#">Devenir <span lang="en">rider</span></a>
</body>
</html>
Here is the implementation in approximate code, with all React considerations removed for the sake of simplicity:
// Assuming these global constants
const language = 'fr'
const translations = { 'home.riders.cta': 'Devenir rider' }
const regExp = /\b(riders?)\b/
const translate = term => {
const content = translations[term]
if (language === 'fr' && regExp.test(content)) {
return content.replace(regExp, '<span lang="en">$1</span>')
}
return content
}
translate('home.riders.cta')
// Devenir <span lang="en">rider</span>
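Note that String.prototype.replace with a non-global regex only rewrites the first occurrence. If a translation could mention the word several times, adding the global flag does the trick. A sketch, with a hypothetical helper name:

```javascript
// Wrap every occurrence of “rider”/“riders” in an English-language span,
// not just the first one, thanks to the /g flag.
const wrapRiders = content =>
  content.replace(/\briders?\b/g, match => `<span lang="en">${match}</span>`)
```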
This builds the pronunciation fix into our translating function so we don’t really have to think about it, and it keeps working as we keep adding content. That’s pretty solid and does the job quite well!
Not so fast. I am not the most efficient person with VoiceOver, but I’m starting to slowly get the hang of it. Unfortunately, I could not really confirm that my fix worked. I tried changing my browser language, and playing with various settings, but no dice. The pronunciation remained fully French despite the span
being marked as English.
Fun fact: Yakim explained that there are 3 levels of languages. There is the system language, the language specified on the html document as well as the language setting in the VoiceOver rotor. That last one basically overwrites both the language setting on the system and the webpage.
Gijs Veyfeyken confirmed what I experienced: it turns out that VoiceOver cannot always switch language inside a link. Indeed, it works in reader mode (although that appears to depend on the browser) or when using a non-interactive element such as a <p>, but not when listing links for easy navigation.
Barry Pollard was kind enough to create some test cases for us to play with. The long story short is that:
- Using a <span> or <i> with a lang attribute inside an <a> does not work properly. The lang attribute is basically ignored.
- Using a <div> with a lang attribute inside an <a> does work, provided the <div>’s display is not set to inline or inline-block. Unfortunately, that breaks styling.

We ran these tests with VoiceOver on Brave, Firefox, iOS Safari and Safari. It turns out only desktop Safari handles all this properly. Using a <span> with a lang attribute inside an <a> does not work consistently elsewhere, and often the lang attribute has essentially no effect.
Laura Ciporen shares a similar experience with JAWS where language markup works fine in a heading but not when listing headings for easy navigation, in which case the language markup is gone.
Marking specific bits of content with a different language via the lang attribute is a legitimate use case, and how it should be done by the book. Unfortunately, in some situations such as links, the lang attribute is essentially ignored.
If you have more information about this topic, feel free to share on Twitter!
There are plenty of ways to achieve this. That’s also why we implemented it 3 times, every time with a slightly different twist. We eventually landed on something that’s relatively simple so I thought I’d share it (both vanilla JS and React).
Consider the following disclosure widget pattern:
<nav role="navigation">
<button
type="button"
id="nav-toggle"
aria-expanded="false"
aria-controls="nav-content"
>
Navigation
</button>
<div id="nav-content" aria-hidden="true" aria-labelledby="nav-toggle">
<ul>
<li><a href="#">Link 1</a></li>
<li><a href="#">Link 2</a></li>
<li><a href="#">Link 3</a></li>
</ul>
</div>
</nav>
Some <a href="#">other link</a> or whatever.
For more information about why we went with this particular HTML structure, please refer to the comprehensive post about Gorillas’ navigation.
Now, with vanilla JavaScript, our implementation would look something like this:
const toggle = document.getElementById('nav-toggle')
const content = document.getElementById('nav-content')
const show = () => {
toggle.setAttribute('aria-expanded', true)
content.setAttribute('aria-hidden', false)
}
const hide = () => {
toggle.setAttribute('aria-expanded', false)
content.setAttribute('aria-hidden', true)
}
toggle.addEventListener('click', event => {
event.stopPropagation()
JSON.parse(toggle.getAttribute('aria-expanded')) ? hide() : show()
})
const handleClosure = event => !content.contains(event.target) && hide()
window.addEventListener('click', handleClosure)
window.addEventListener('focusin', handleClosure)
It works more or less like this: we listen for click events and focus change on the window object. When clicking or focusing an element that is not contained within the menu element, we close the menu. You’ll notice we don’t actually check whether the menu is open or not before we try closing it, because it makes little to no difference, performance wise.
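One detail that is easy to miss in the click handler: aria-expanded comes back from getAttribute as a string, which is why it goes through JSON.parse before being flipped. As a pure helper (hypothetical name) the logic looks like this:

```javascript
// Parse the string attribute value and flip it: 'false' becomes true,
// 'true' becomes false. getAttribute never returns a real boolean.
const nextExpandedState = ariaExpanded => !JSON.parse(ariaExpanded)
```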
One important thing to point out: we have to stop the propagation of the click event on the toggle itself. Otherwise, it bubbles up to the window click listener, and since the toggle is not contained within the menu, it would close the latter as soon as we try to open it.
We originally used Event.composedPath
, which provides the DOM path from the root of the document to the event target. Unfortunately, we noticed it wasn’t supported in many cases, so we had to revisit the implementation.
Our implementation is actually in React, so I might as well share it. We use react-a11y-disclosure to handle the disclosure pattern for us, but I skipped it here for the sake of simplicity.
const useAutoClose = ({ setIsOpen, menu }) => {
const handleClosure = React.useCallback(
event => !menu.current.contains(event.target) && setIsOpen(false),
[setIsOpen, menu]
)
React.useEffect(() => {
window.addEventListener('click', handleClosure)
window.addEventListener('focusin', handleClosure)
return () => {
window.removeEventListener('click', handleClosure)
window.removeEventListener('focusin', handleClosure)
}
}, [handleClosure, menu])
}
const Menu = props => {
const menu = React.useRef()
const [isOpen, setIsOpen] = React.useState(false)
useAutoClose({ setIsOpen, menu })
return (
<nav role='navigation'>
<button
type='button'
id='nav-toggle'
aria-expanded={isOpen}
aria-controls='nav-content'
onClick={event => {
event.stopPropagation()
setIsOpen(isOpen => !isOpen)
}}
>
Navigation
</button>
<div id='nav-content' aria-hidden={!isOpen} aria-labelledby='nav-toggle'>
<ul>
<li>
<a href='#'>Link 1</a>
</li>
<li>
<a href='#'>Link 2</a>
</li>
<li>
<a href='#'>Link 3</a>
</li>
</ul>
</div>
</nav>
)
}
That’s it. Relatively simple in the end. For a more comprehensive solution, one might want to check react-outside-click-handler from AirBnB but truth be told, I don’t know what it does that this solution doesn’t do. Anyway, I hope it helps! 💖
The idea is to intercept runtime JavaScript errors, and reload the page with a query parameter which causes the JavaScript bundles not to be rendered, thus simulating a no-JavaScript experience. This way, the user can browse the no-JS version instead of being stuck on a broken page.
I recently announced Gorillas’ new website built with Next, which almost fully supports JavaScript being disabled. So I was eager to try adding a similar error-recovery feature.
While we do use Next, we do not use the runtime. We essentially use Next as a static site generator. When deploying the site, we build all pages statically (with Next’s static HTML export), and serve them via Netlify. There is no Node server or anything like that. It’s just a bunch of HTML files eventually enhanced by a client-side React application.
This means that the HTML files do contain <script>
tags at the bottom of the body element to execute our bundles. We can’t decide not to render them because, once again, this is all just static files—there is no running server that can modify the response.
So that’s not even really Next’s fault per se. Any static site generator would have the same problem. Once the browser receives the HTML response, it’s done, we can’t modify it. It will read the <script>
tags, download the files, parse them and execute them. So… rough one to solve I guess.
As mentioned in the update at the top of the article, this solution is a hack at best, and I came up with a better solution thanks to Maximilian Fellner’s hints. Do not implement this window.close()
hack and take the <template>
route instead.
If we can’t do anything about the script tags being rendered in the HTML response, maybe we can prevent the browser from executing them? Well, again, not really. Browsers do not offer a fine-grained API into their resources’ system to tell them to ignore or prioritize certain assets.
Did you know about window.stop()
though? ‘Cause I didn’t until today. That’s a method on the window
object that essentially does what the “Stop” button from the browser does. Quoting MDN:
The
window.stop()
[function] stops further resource loading in the current browsing context, equivalent to the stop button in the browser. Because of how scripts are executed, this method cannot interrupt its parent document's loading, but it will stop its images, new windows, and other still-loading objects.
What if we called window.stop()
before the browser reaches the <script>
tags rendered by <NextScript />
? Let’s try that by updating ./pages/_document.js
(see Custom Document
in Next’s documentation):
class MyDocument extends Document {
static getInitialProps(ctx) {
return Document.getInitialProps(ctx)
}
render() {
return (
<Html>
<Head />
<body>
<Main />
{/* Trying to prevent <script> elements rendered by
`<NextScript />` from being executed. The proper
condition will be covered in the next section. */}
<script dangerouslySetInnerHTML={{ __html: `
if (true) window.stop()
` }} />
<NextScript />
</body>
</Html>
)
}
}
Performing a Next export and serving the output folder before loading any page yields positive results: not only are the <script>
tags not executed, but they’re not even rendered in the dev tools. That’s because window.stop()
literally killed the page at this point, preventing the rest of the document from being rendered.
<script>if (true) window.stop()</script>
</body>
</html>
Of course, we do not want to always prevent the scripts’ execution. Only when we’ve captured a JavaScript error and reloaded the page with a certain query parameter. To do that, we need an error boundary.
class ErrorBoundary extends React.Component {
componentDidCatch(error, info) {
const { pathname, search } = window.location
window.location.href =
pathname + search + (search.length ? '&' : '?') + 'no_script'
}
render() {
return this.props.children
}
}
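The reload-URL logic in componentDidCatch is worth pinning down on its own, since it has to handle both an empty and a non-empty query string. Extracted as a pure helper with a hypothetical name:

```javascript
// Append the no_script parameter, starting the query string with `?`
// if there is none yet, or chaining with `&` otherwise.
const withNoScriptParam = (pathname, search) =>
  pathname + search + (search.length ? '&' : '?') + 'no_script'
```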
We can render that component around our content in ./pages/_app.js
(see Custom App
in Next’s documentation).
function MyApp({ Component, pageProps }) {
return (
<ErrorBoundary>
<Component {...pageProps} />
</ErrorBoundary>
)
}
Finally, in our ./pages/_document.js
, we can check for the presence of this URL parameter. If it is present, we need to stop the execution of scripts.
class MyDocument extends Document {
static getInitialProps(ctx) {
return Document.getInitialProps(ctx)
}
render() {
return (
<Html>
<Head />
<body>
<Main />
<script dangerouslySetInnerHTML={{ __html: `
if (window.location.search.includes('no_script')) {
window.stop()
}
` }} />
<NextScript />
</body>
</Html>
)
}
}
That’s it, job done. Hacky as hell, but heh. It seems to work okay. For the most part at least, as it has some potentially negative side-effects: any ongoing request, such as for lazy loaded images, will be interrupted. That can cause some images not to render. Still better than a broken page due to a JavaScript error in my opinion, but I guess the choice is yours.
Alright people, lay it on me. How bad is this, and how ashamed shall I be?
Maximilian Fellner was kind enough to take the time to build a demo of a way to inject Next scripts dynamically. The solution is a little complicated so I won’t go into the details in this article—feel free to check Maximilian’s proof of concept. Thanks for the hint Max!
Building on top of his work, I figured out a rather elegant way forward. Instead of rendering <script>
tags and then trying to remove or not to execute them when the no_script
parameter is present, let’s turn it around. Let’s not render the <script>
tags, and only dynamically inject them at runtime when the no_script
URL parameter is absent.
However, Next does not provide a built-in way to know which scripts should be rendered on a given page, or what their paths are. There is no exposed asset manifest or anything like that. So what we can do is render them within a template. If you are not familiar with the <template>
HTML element, allow me to quote MDN:
The HTML Content Template (
<template>
) element is a mechanism for holding HTML that is not to be rendered immediately when a page is loaded but may be instantiated subsequently during runtime using JavaScript.
class MyDocument extends Document {
static getInitialProps(ctx) {
return Document.getInitialProps(ctx)
}
render() {
return (
<Html>
<Head />
<body>
<Main />
<template id='next-scripts'>
<NextScript />
</template>
</body>
</Html>
)
}
}
Perfect. Now, all we need is a little JavaScript snippet to properly render these <script>
tags if the no_script
URL parameter is not present.
const scriptInjector = `
if (!window.location.search.includes('no_script')) {
var template = document.querySelector("#next-scripts")
var fragment = template.content.cloneNode(true)
var scripts = fragment.querySelectorAll("script")
Array.from(scripts).forEach(function (script) {
document.body.appendChild(script)
})
}
`.trim()
class MyDocument extends Document {
static getInitialProps(ctx) {
return Document.getInitialProps(ctx)
}
render() {
return (
<Html>
<Head />
<body>
<Main />
<template id='next-scripts'>
<NextScript />
</template>
<script dangerouslySetInnerHTML={{ __html: scriptInjector }} />
</body>
</Html>
)
}
}
Boom, job done. If the no_script
URL query parameter is present, the script will do nothing, effectively mimicking a no-JavaScript experience. If it is not, it will load the Next bundles, just like normal.
A few days ago, I posted a few tweets about the new Gorillas’ website. It’s a pretty simple site at this stage: a couple pages, not much interaction, mostly there to showcase Gorillas’ branding as we expand rapidly across Europe (check it out if you can, it’s good stuff ✨).
One of the most interesting parts of the site—at least from a technical standpoint—has to be the navigation. So I thought I’d write a short piece about everything that went into it, from accessibility to behaviour to design.
For better or worse, the navigation is some sort of dropdown. That means it’s not just a few links at the top of the page, so there were a few things we had to consider.
We figured the disclosure widget pattern was the appropriate choice for the navigation. Basically, you have a <button>
which controls the visibility of an adjacent container. The toggle contains visually hidden text to mention what it’s for, since its only visible content is the brand logo.
For when JavaScript is not available, we originally intended to have an anchor link pointing to the footer, since most if not all pages are linked from there as well.
Then I thought about using <details>
and <summary>
since we have pretty loose browser support expectations. It gives us a disclosure widget without needing any JavaScript, which is pretty great. We just had to tweak the styles a little to hide the default arrow and make it a bit more integrated.
/**
* 1. Remove the <summary> arrow in Firefox.
*/
summary {
display: block; /* 1 */
cursor: pointer;
}
/**
* 1. Remove the <summary> arrow in all other browsers.
*/
summary::-webkit-details-marker {
display: none; /* 1 */
}
September 20th edit: using <details>
and <summary>
for a navigation menu is not fantastic, as outlined by Gerardo Rodriguez and Adrian Roselli. Because it gets progressively enhanced into a proper disclosure widget when JS kicks in, it may be fine, but generally speaking this is not the right approach. I did not know this at the time.
As much as I love <details>
and <summary>
, they’re also not perfect for a navigation, because clicking elsewhere or tabbing out of it does not close it.
That’s why when JavaScript is available, we replace them with a <button>
(with aria-controls
and aria-expanded
) and a <div>
with (aria-hidden
and aria-labelledby
), so we can have more control over the behaviour—particularly when to close the menu.
<nav role="navigation">
<button
type="button"
aria-controls="menu"
aria-expanded="false"
id="menu-toggle"
>
<svg aria-hidden="true" focusable="false"><!-- Logo --></svg>
<span class="sr-only">Navigation</span>
</button>
<div id="menu" aria-labelledby="menu-toggle" aria-hidden="true">
<!-- Navigation content -->
</div>
</nav>
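To give an idea of the wiring (a simplified vanilla sketch, not our actual React implementation; the element ids come from the markup above), toggling the disclosure boils down to flipping the two ARIA attributes in sync:

```javascript
// Compute the next attribute values from the current expanded state.
// Kept as a pure function so the logic is easy to test; the name is made up.
function nextDisclosureState(isExpanded) {
  return {
    'aria-expanded': String(!isExpanded),
    'aria-hidden': String(isExpanded),
  }
}

// DOM wiring, guarded so the snippet can also run outside a browser.
if (typeof document !== 'undefined') {
  const toggle = document.querySelector('#menu-toggle')
  const menu = document.querySelector('#menu')

  if (toggle && menu) {
    toggle.addEventListener('click', () => {
      const isExpanded = toggle.getAttribute('aria-expanded') === 'true'
      const next = nextDisclosureState(isExpanded)
      toggle.setAttribute('aria-expanded', next['aria-expanded'])
      menu.setAttribute('aria-hidden', next['aria-hidden'])
    })
  }
}
```

The two attributes are always mirror images of one another: when the toggle is expanded, the menu is not hidden, and vice versa.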
Interesting point raised by Aurélien Levy on Twitter: When using aria-expanded="true"
, the label should not mention “open” or “close” (or similar) as the state is already conveyed via the attribute.
Without getting too deep into technical details (especially because our implementation is in React), we use something along these lines to automatically close the menu when clicking outside of it or tabbing out of it.
const menu = document.querySelector('#menu')
const handleFocusChange = event => {
if (isOpen && !event.composedPath().includes(menu)) setIsOpen(false)
}
window.addEventListener('click', handleFocusChange)
window.addEventListener('focusin', handleFocusChange)
Because landmarks such as <nav>
can be listed by assistive technologies, it’s important that the <nav>
itself is not the element whose visibility is being toggled. Otherwise, it’s undiscoverable when hidden, which is not good. Instead, the <nav>
should be the surrounding container, always visible, containing the button that toggles the list’s visibility.
Simplified and using <details>
and <summary>
in this example for the sake of clarity:
<nav role="navigation">
<details>
<summary>
<svg aria-hidden="true" focusable="false"><!-- Logo --></svg>
<span class="sr-only">Navigation</span>
</summary>
<ul>
<!-- Navigation content -->
</ul>
</details>
</nav>
What’s a little subtle about the design is that the list doesn’t have a solid background—it’s a heavy blur, which gives a slight shade on top of the thick white on black typography underneath.
Fortunately, CSS now has a backdrop-filter property, which enables us to apply filters to the area behind an element (as opposed to filter, which applies to the element itself). We still need to make sure things look okay if the property is not supported though. @supports to the rescue!
#menu {
background-color: #201e1e;
color: #fff;
}
@supports (backdrop-filter: blur(1px)) or (-webkit-backdrop-filter: blur(1px)) {
#menu {
background-color: rgb(255 255 255 / 0.15);
-webkit-backdrop-filter: blur(40px);
backdrop-filter: blur(40px);
}
}
This property is not without pitfalls though. Because the actual list is absolutely positioned, we could not set the backdrop filter on the <nav> container alone, because the list would end up with no background when open (again, absolutely positioned).
We thought about setting it on both the nav and the list but, for some awkward reason, nested backdrop filters do not work in Chrome: the list ends up with no blur. It’s fine in Safari though. Don’t ask me why.
So we ended up applying the filter on both the toggle and the list (thus covering the whole nav area with blur). As a result, we unfortunately end up with a thin yet noticeable line where the two blur areas meet. Sad, but I guess there is no way out.
Another difficulty caused by the list being absolutely positioned is handling rounded corners. I know, right? Who knew rounded corners could be difficult? Slap a little border-radius
on this baby and call it done. Well, unfortunately not.
The toggle has the so-called “pill” style. The corners are soft and fully embrace the shape, like one of these glossy pills. When opening the menu, the top corners stay the same, but the bottom corners of the toggle become sharp to blend in with the top corners of the list, and the curves move to the bottom corners of the list. And this needs to be handled both with and without JavaScript.
details[open] > summary {
border-radius: 30px 30px 0 0;
}
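With the JavaScript-enhanced version, the equivalent styles can hang off the aria-expanded attribute instead. This is a sketch, not our exact stylesheet: the radius value mirrors the rule above and the ids come from the earlier markup.

```css
/* Sketch: when the menu is open, flatten the toggle's bottom corners
   and move the curves to the bottom corners of the list. */
#menu-toggle[aria-expanded='true'] {
  border-radius: 30px 30px 0 0;
}

#menu-toggle[aria-expanded='true'] + #menu {
  border-radius: 0 0 30px 30px;
}
```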
This design detail also makes animating the menu opening quite difficult, so much so that we didn’t bother with it.
Of course, we should not forget to add a skip link so people using a keyboard and/or assistive technologies can quickly access the main content without having to go through the navigation.
What’s relatively interesting about having the navigation within a disclosure widget is that the links are not focusable until the nav is displayed, so the skip link itself loses some of its value.
What I like about the skip link on Gorillas’ website is that it fits nicely into the navigation design, which is not always the case. More often than not, skip links end up being floating at the top of the page, a little oblivious to what’s happening around. Here, it fits nicely within the toggle area.
While it will most likely move out of the navigation soon as we add more and more languages, the language switcher currently lives within the navigation menu. Although, I suppose we are playing fast and loose with the word “switcher” here since it’s just a few hyperlinks to the different versions of the website.
Still, a few things we paid attention to:
While the links say “EN”, “DE” and “NL”, it’s not fantastic from a verbal perspective. “EN” and “NL” are pronounced as you would expect, but “DE” is pronounced “duh”, which sucks. I assume most screen-reader users would be accustomed to this sort of pronunciation for language codes, but we wanted to do better. The 2-letter code is marked with aria-hidden so it’s not read out, and each link contains visually hidden text mentioning the full language name.
Aurélien Levy rightfully pointed out on Twitter that marking the 2-letter code as aria-hidden
would fail WCAG SC 2.5.3 Label in Name. Since the visible label is, say, “EN”, voice navigation users can activate it using a command like “click EN”. That will no longer work if the “EN” text is hidden with aria-hidden
. Sara Soueidan expands on the matter in her own blog.
The language name is written in that language itself, and not in the current page language. Browsing the English navigation will read “Deutsch” for the German link, and not “German”. So that it’s pronounced correctly, the language name is wrapped in a span with the lang attribute. This way, a screen-reader will switch to a German pronunciation to voice “Deutsch”.
Each link to an alternative version has the hreflang
attribute to indicate that the page behind the link will be in a certain language. There is little information about the hreflang
attribute on links out there, so it might do basically nothing. I’m not sure.
The separators between all 3 links are marked with aria-hidden
since they are strictly decorative. They could have been made with CSS as well, but it was a little less convenient.
We did not fall into the trap of using flags to represent languages, since flags ultimately represent countries, not languages. While we often associate a country with a language, this line of thinking falls short for many countries and languages.
This is basically what it looks like in the end:
<a href="/en" hreflang="en">
EN<span class="sr-only" lang="en"> — English</span>
</a>
<span aria-hidden="true">/</span>
<a href="/de" hreflang="de">
DE<span class="sr-only" lang="de"> — Deutsch</span>
</a>
<span aria-hidden="true">/</span>
<a href="/nl" hreflang="nl">
NL<span class="sr-only" lang="nl"> — Nederlands</span>
</a>
That’s about it, folks! As I said, this is nothing too spectacular, but it’s still interesting how many little considerations can go into such a simple interface module. I hope this helps you. If you have any suggestion or comments, please get in touch on Twitter. :)
This meant updating some things on the interwebs so that Kitty becomes more prominent, such as links, social profiles… Except I have been pretty active for the past decade and it turns out it’s pretty hard to do. 😅 In this short piece, I’ll just walk through a few of the steps I had to take.
If you ever mentioned me somewhere on your blog, site or platform, I would really appreciate if you could take a minute to update my name to Kitty. Thank you for your help! 🙏
I bought kittygiraudel.com a while ago, and because I particularly dislike dealing with domain names and DNS configuration, at that time I simply put a small redirect on it so it leads to my website. I recently flipped both domains so that kittygiraudel.com is effectively the one that’s used, and the other one redirects to it.
I then proceeded with a domain address change on Google’s side so they stop indexing the old domain name and migrate any index onto the new one. I have to admit that their tool is surprisingly straightforward, although I have yet to see if it works properly since the migration is still running. It could take weeks to take full effect, so I have to be patient.
Many thanks Valérian Galliat for his kind help with the DNS configuration. Thanks to him, it was smooth and seamless.
Twitter has some documentation on changing one’s handle. This is something one can do directly from within the account settings. Nothing too complicated.
Note: Changing your username will not affect your existing followers, Direct Messages, or replies. Your followers will simply see a new username next to your profile photo when you update. We suggest you alert your followers before you change your username so they can direct replies or Direct Messages to your new username. Additionally, please note that once you change your username, your previous username will immediately be available for use by someone else.
Two important pieces of information in there:
What I ended up doing was opening the Twitter sign-up process in another browser, renaming my account to @KittyGiraudel to free the old handle, and immediately creating another account under the old name. That placeholder account has my face, bio and a link to the new account so anyone following an old mention can still find me.
GitHub was a bit of an odd bird, because I already had two accounts, under both names. My main account with a decade of open-source work under the old name, and a fresh account for work under the new name. What I wanted was to end up with my main account, but under the KittyGiraudel username.
I contacted their support to ask what the best way to merge both accounts was, and they were very helpful. Basically, they recommended migrating any repositories I might have onto the main account (I had none, so that was easy), then deleting the empty account to free the username, before finally renaming the main account to KittyGiraudel. Very easy.
What is great about GitHub is that they maintain relevant redirects. All the links to my repositories using my old GitHub handle still work, because they redirect to the new one. Even the few sites using GitHub Pages got instantly migrated as well, no downtime. A+.
Bonus point: Commits and contributions are tied to an email address, so by adding the new email address I was using onto my main account, I managed to merge the stats from both accounts. Not only that, but the authorship of all my commits done under the name KittyGiraudel before deleting the account remained clean.
Updating all the name references and URLs on that site was easy. It’s basically a search and replace away. But I have been writing for many different news outlets and blogs over the years, and having one’s details updated everywhere is an absolute pain.
For larger blogs like SitePoint, CSS-Tricks or Codrops, I could edit the profile myself to update my content. When updating the URL slug was not possible, I had to contact the platform owner for them to do it.
For smaller sites, I had to contact authors individually to ask them to perform updates for me. I went as far back as page 10 of a Google search on my name to find references. It’s a never-ending battle.
I wrote two books. Like, paper books. This ain’t going away. And the thing about the publishing industry is that there are literally dozens of sites that index books, including their authors, and over which no one has any control. Think of all the library websites, book shops and so on.
There are also a lot of development-related websites which are now long dead, but still maintain people’s profile, such as cssdeck.com. It’s annoying, but I guess there is not much one can do about it.
Long story short: there is no way to change everything easily and conveniently. It’s a lot of requests here and there to have people fix things. I assume it will get better as search engines like Google stop indexing the old content. Now we play the waiting game.
[T]he page is […] reloaded when following a link and the focus is restored to the top of the page. When navigating with [assistive technologies], that means having to tab through the entire header, navigation, sometimes even sidebar before getting to access the main content. This is bad. […] To work around the problem, a common design pattern is to implement a skip link, which is an anchor link sending to the main content area.
So that’s all great. Except a skip link is not so trivial to implement, and there are quite a few constraints it should fulfill.
It’s not massively complex, but it’s also easy to screw up. And since it’s required on every single website, it begs the question… Why don’t browsers do it natively? That is something I suggested to the Web We Want back in December.
It’s not hard to imagine how browsers could handle this natively, with little to no control left to web developers in order to provide a better and more consistent experience for people using assistive technologies.
When tabbing out of the browser’s chrome and into a web page (or using a dedicated keyboard shortcut), the browser would immediately display the skip link, knowing that:
It would be inserted as the very first element in tab order.
It would use the browser’s language, which might not necessarily be the page’s language.
It would technically be part of the browser’s interface and not part of the website. So this would be styled according to the browser’s theme.
It would not be accessible (in the strict meaning of the term) by the web page, on purpose.
It would be rendered on top of the page in order not to risk breaking the layout.
The main idea is to have little to no control over it. The same way developers do not have a say in how the browsers’ tabs or address bar look and behave. That being said, the target for the link should be configurable.
A sensible default would be to point to the <main>
element since it is unique per page and is explicitly intended to contain main content.
The main content of the body of a document or application. The main content area consists of content that is directly related to or expands upon the central topic of a document or central functionality of an application.
— W3C HTML Editors Draft
Not all websites use the <main>
element though. I assume browsers could have some baked-in heuristics to figure out what is the main content container, but perhaps that falls outside of the scope of this feature suggestion.
Therefore, to provide web developers with a way to precisely define which container really is the main one, a <meta> tag could be used. It would accept a CSS selector (as simple or complex as it needs to be), and the browser would query that DOM node to move the scroll + focus to it when using the skip link.
<meta name="skip-link" value="#content" />
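As a rough illustration of how such a tag could be consumed (a sketch only: the skip-link meta tag is the hypothetical proposal above, not a real standard, and the function name is made up; it works on an HTML string for simplicity, whereas a browser would query the DOM):

```javascript
// Extract the configured skip-link target from a document's HTML,
// falling back to the <main> element when no meta tag is present.
function getSkipTarget(html) {
  const match = html.match(/<meta\s+name="skip-link"\s+value="([^"]*)"/)
  return match ? match[1] : 'main'
}
```

For instance, `getSkipTarget('<meta name="skip-link" value="#content" />')` returns `#content`.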
Another approach would be to use a <link>
tag with the rel
attribute, as hinted by Aaron Gustafson.
<link rel="skip-link" href="#content" />
Whether browsers should listen to changes for this element (whichever may that be) is an open question. I would argue that yes, just to be on the safe side.
Were browsers to implement skip links natively, what would happen to our existing custom ones? They would most likely not be much of a bother.
Tabbing within the web content area would display the native skip link. If used, it would bypass the entire navigation, including the custom skip link. If not, the next tab stop would be the site’s skip link, which would be redundant, but not critical either.
Ideally, the browser provides a way to know whether that feature is supported at all so skip links can be polyfilled for browsers not supporting them natively yet. This would most likely require JavaScript though.
if (!window.navigator.skipLink) {
const skipLink = document.createElement('a')
skipLink.href = '#main'
skipLink.innerHTML = 'Skip to content'
document.body.prepend(skipLink)
}
This is by no means perfect. I don’t have a bulletproof solution to offer. And if there were one, I’m certain people way smarter and more educated than I am would have offered it already.
Still, the lack of skip links represents a significant accessibility impediment to people using assistive technologies to browse the web. And considering every website needs one, with little to no variation from website to website, it does feel like something browsers could do on their side.
As always, feel free to share your thoughts with me on Twitter. :)
In this article, I want to discuss all the ways to hide something, be it through HTML or CSS, and when to use which. Feel free to jump to the summary.
| Method | Visible | Accessible |
|---|---|---|
| .sr-only class | No | Yes |
| aria-hidden="true" | Yes | No |
| hidden="" | No | No |
| display: none | No | No |
| visibility: hidden | No, but space remains | No |
| opacity: 0 | No, but space remains | Depends |
| clip-path: circle(0) | No, but space remains | Depends |
| transform: scale(0) | No, but space remains | Yes |
| width: 0 + height: 0 | No | No |
| content-visibility: hidden | No | No |
The .sr-only class

This combination of CSS declarations hides an element from the page, but keeps it accessible to screen readers. It comes in very handy to provide more context to screen readers when the visual layout alone is clear enough without it.
This technique should only be used to mask text. In other words, there shouldn’t be any focusable element inside the hidden element, as that could lead to annoying behaviours, like scrolling to an invisible element.
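For reference, here is one common incarnation of such a class. This is a sketch of the widely shared visually-hidden snippet; exact declarations vary from codebase to codebase:

```css
/* Visually hide an element while keeping it in the accessibility tree. */
.sr-only {
  border: 0;
  clip: rect(1px, 1px, 1px, 1px);
  clip-path: inset(50%);
  height: 1px;
  margin: -1px;
  overflow: hidden;
  padding: 0;
  position: absolute;
  white-space: nowrap;
  width: 1px;
}
```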
Summary:
- Not visible
- Accessible

Verdict: 👍 Great to visually hide text content while preserving it for assistive technologies.
The aria-hidden attribute

The aria-hidden HTML attribute, when set to true, hides the content from the accessibility tree while keeping it visually displayed. It stays visible because browsers do not apply any styles to elements with aria-hidden="true"; the attribute only impacts the accessibility tree.
It is important to note that any focusable elements within an element with aria-hidden="true" remain focusable, which can be a big problem for screen readers. Make sure there are no focusable elements within such a container and that the element itself is not focusable either (see the fourth rule of ARIA).
Summary:
- Still visible
- Removed from the accessibility tree (but still vocalized when referenced via aria-describedby and aria-labelledby)

Verdict: 👍 Great to hide something from assistive technologies while keeping it visually displayed. Use with caution.
The display: none declaration and the hidden attribute

The display: none CSS declaration and the hidden HTML attribute do the same thing: they remove an element from both the rendering tree and the accessibility tree.
What’s nice about the hidden
attribute is that you can mask content entirely through HTML without having to write any CSS, which can be handy in some contexts.
Interesting fact shared by Aurélien Levy: removed content with these methods can still be vocalized when referenced via aria-describedby
or aria-labelledby
. This can be handy to avoid double-vocalization. For instance, if a field references a text node via aria-describedby
, this content can safely be hidden (with hidden
, display: none
or even aria-hidden="true"
) so that it cannot be discovered normally, but still be announced when the field is focused.
Summary:
- Not visible
- Removed from the accessibility tree (but still vocalized when referenced via aria-describedby and aria-labelledby)

Verdict: 👍 Great to hide something from both assistive technologies and screens.
The visibility: hidden declaration

The visibility: hidden CSS declaration visually hides an element without affecting the layout. The space it takes up remains empty and surrounding content doesn’t reflow into its place.
From an accessibility perspective, the declaration behaves like display: none: the content is removed entirely and is not accessible.
Summary:
- Not visible, but the space it takes remains
- Not accessible

Verdict: 👍 Good when display: none is not an option and the layout permits it.
The opacity: 0 and clip-path: circle(0) declarations

The opacity: 0 and clip-path: circle(0) CSS declarations visually hide an element, but the space it takes is not freed, just like visibility: hidden.
Whether the content remains accessible depends on assistive technologies. Some will consider the content inaccessible and skip it, and some will still read it. For that reason, it is recommended not to use these declarations to consistently hide content.
Summary:
- Not visible, but the space it takes remains
- Accessibility depends on the assistive technology

Verdict: ✋ Shady and inconsistent, so don’t use these except maybe for visual animation purposes.
The transform: scale(0) declaration

The transform: scale(0) CSS declaration visually hides an element, but the space it takes is not freed, just like visibility: hidden, opacity: 0 and clip-path: circle(0).
The content remains accessible to screen readers though.
Summary:
- Not visible, but the space it takes remains
- Accessible

Verdict: ✋ Restrict to visual animation purposes.
The width: 0 and height: 0 declarations

Resizing an element to a 0×0 box with the width and height CSS properties and hiding its overflow will cause the element not to appear on screen, and as far as I know all screen readers will skip it as inaccessible. However, this technique is usually considered quite fishy and could cause SEO penalties.
Summary:
- Not visible
- Not accessible

Verdict: 👎 Unclear and unexpected, risky from an SEO perspective. Don’t.
The content-visibility: hidden declaration

The content-visibility CSS property was introduced as a way to improve performance by hinting to the browser (Chrome, as of writing) that it can skip rendering an element until it is within the viewport.
Content made hidden with content-visibility: hidden
will effectively be absent from the accessibility tree entirely (just like with display: none
). This is not necessarily intended behaviour though, and for that reason it is recommended not to use that declaration on landmarks.
Summary:
- Not visible
- Not accessible

Verdict: 👎 Poor support, poorly implemented. Don’t.
Generally speaking, you want to avoid having too many discrepancies between the visual content, and the underlying content exposed to the accessibility layer. The more in sync they are, the better for everyone. Remember that a clearer visual interface with more explicit content benefits everyone.
If you need to hide something both visually and from the accessibility tree, use display: none
or the hidden
HTML attribute. Valid cases: show/hide widget, offscreen navigation, closed dialog.
If you need to hide something from the accessibility tree but keep it visible, use aria-hidden="true"
. Valid cases: visual content void of meaning, icons.
If you need to visually hide something but keep it accessible, use the visually hidden CSS declaration group. Valid cases: complementary content to provide more context, such as for icon buttons/links.
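The three recommendations above can be condensed into a tiny decision helper, purely for illustration (the function name and option names are made up):

```javascript
// Pick a hiding technique based on whether the content should remain
// visible on screen and/or exposed to assistive technologies.
function hidingTechnique({ visible, accessible }) {
  if (!visible && !accessible) return 'display: none or the hidden attribute'
  if (visible && !accessible) return 'aria-hidden="true"'
  if (!visible && accessible) return 'the .sr-only class'
  return 'nothing to hide, keep the content as is'
}
```

For instance, `hidingTechnique({ visible: false, accessible: true })` returns `the .sr-only class`.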
Quick reminder of what we should do, courtesy of Leonie Watson’s article on the matter:
<span role="img" aria-label="Star" title="Star">⭐️</span>
This is easy to do in content pages authored in HTML, but becomes more complicated in articles written in Markdown, let alone done retroactively on hundreds of pages. So the idea is to post-process the resulting HTML to wrap emojis with a span as shown above.
Fortunately, 11ty allows us to post-process HTML with transforms. They are very handy to, well, transform a template’s output, such as minifying the resulting HTML for instance.
Here, we want a transform that will find emojis in the rendered HTML and wrap each of them with the appropriate span.
Let’s start by creating the boilerplate for our transform:
eleventyConfig.addTransform('emojis', (content, outputPath) => {
return outputPath.endsWith('.html') ? wrapEmojis(content) : content
})
function wrapEmojis(content) {
// Our code here
}
Finding emojis is surprisingly easy thanks to Mathias Bynens’ emoji-regex
. This package provides an automatically generated (long) regular expression to find emoji unicode sequences.
From there, we can already wrap our emojis:
// The package exports a function, not a regular expression, so we have to
// call it first to get the regular expression itself.
const emojiRegex = require('emoji-regex/RGI_Emoji')()
function wrapEmojis(content) {
return content.replace(
emojiRegex,
match => `<span role="img">${match}</span>`
)
}
Now we need to figure out the English label for a given emoji. It turns out that this is surprisingly difficult. Mathias Bynens explains why:
It’s trickier, as it’s not obvious what the expected output is for many emoji. Should you just use the Unicode names? What about sequences? etc.
Nevertheless, I found emoji-short-name, which is based on emoji-essential, itself scraped from Unicode.org. This package gives us the English description of an emoji.
// The package exports a function, not a regular expression, so we have to
// call it first to get the regular expression itself.
const emojiRegex = require('emoji-regex/RGI_Emoji')()
const emojiShortName = require('emoji-short-name')
function wrapEmojis(content) {
return content.replace(emojiRegex, wrapEmoji)
}
function wrapEmoji(emoji) {
const label = emojiShortName[emoji]
return label
? `<span role="img" aria-label="${label}" title="${label}">${emoji}</span>`
: emoji
}
That’s about it! As I said, pretty cheap to implement. Now, to be honest, I don’t know how robust this solution is. Some emojis might be missing (especially when new ones get added) and some descriptions might be sub-optimal. Additionally, it doesn’t check whether an emoji is already properly wrapped, which could cause a double-wrap (although I’d say this could be fixed relatively easily).
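Should the double-wrapping ever become an issue, a rough guard could look like the following sketch. Everything here is illustrative: the function name, the injected getLabel callback and the 60-character look-back window are made up, the heuristic is approximate, and it assumes a pattern without capture groups.

```javascript
// Wrap matches of `pattern`, skipping ones that already sit inside a
// role="img" span, detected by peeking at the text just before the match.
function wrapEmojisSafely(content, pattern, getLabel) {
  return content.replace(pattern, (emoji, offset) => {
    // Look at up to 60 characters preceding the match. If they end with an
    // opening <span … role="img" …> tag, consider the emoji already wrapped.
    const before = content.slice(Math.max(0, offset - 60), offset)
    if (/<span[^>]*role="img"[^>]*>$/.test(before)) return emoji
    const label = getLabel(emoji)
    return label
      ? `<span role="img" aria-label="${label}" title="${label}">${emoji}</span>`
      : emoji
  })
}
```

Running it a second time over its own output then leaves the already wrapped emojis untouched.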
Still, it’s a pretty convenient way to make emojis a little more accessible with 11ty!
I am by no means an avid reader, but every now and then I find a book I like and cannot get enough of it. Here are the last few books I’ve read that I would absolutely recommend to all. :)
The End of Everything (Astrophysically Speaking) is a masterpiece from Dr. Katie Mack, a theoretical astrophysicist who studies a range of questions in cosmology, the study of the universe from beginning to end.
In this book, which makes no assumption about the reader’s knowledge of astrophysics, she takes us through existing theories about how everything will end, from the big rip to the big crunch. On top of being an accomplished astrophysicist, Katie is a great writer, witty and engaging, making the whole piece an absolute delight to read.
If you are still not convinced, feel free to read the book’s reviews, each more stellar than the last.
Mathematics never was my passion. I always failed to appreciate its usefulness, and it never came easily to me. Back in high school, math exams were my dread, and to this day I cannot say I particularly enjoy solving problems revolving around maths. Still, I devoured this book.
Alex Bellos takes us through the history of maths and its applications to everyday topics. Far from being a geeky book for mathletes, Alex’s Adventures in Numberland feels more like a collection of anecdotes about mathematics and geometry, from ancient civilisations to the place they hold in our society, whose central theme slowly unravels throughout the book.
The Guardian published a review of the book back in 2011 shortly after it got released. It might be worth reading if you’re not sure this book is for you.
I will never stop recommending Technically Wrong by Sara Wachter-Boettcher, because it is an eye-opening fantastic piece on everything that’s wrong with the tech industry—from racist artificial intelligence algorithms to ethical conundrums, from corporate failures to systemic issues.
Published in 2017, it still completely holds up and everyone working or even simply being interested in tech and its supposed promise to save the world should read it.
🤯 February 24th. I worked on a fun little project called Selectors Explained. It’s a teeny-tiny site explaining a CSS selector in plain English.
🐈 April 17th. One of my cats passed away after several health complications. He’d been with me for almost 10 years, and it was difficult saying goodbye. Being home all the time and missing his presence did not make things any easier.
🐱 October. I started using “Kitty” instead of my legal first name as a semi-official name, starting with many online services. I also bought kittygiraudel.com.
🌱 November 15th. This day marked 2 years for me without eating meat and one year without fish.
💻 November 23rd. After having my MacBook Pro since early 2014, I decided it was time to buy a new one. It’s not much, but it was quite a big decision to buy a new laptop, so it’s something!
👋 November 27th. My last day at N26, after over 4 years as the lead web engineer. It’s been bittersweet leaving the team, but I’m excited to join Gorillas in January.
⚙️ November 30th. I moved my site from Jekyll to 11ty, after over 8 years running on Jekyll. Very happy with my decision though!
📆 December. I participated in and finished the Advent of Code, for the first time. It was a lot of fun but was only doable because I didn’t work in December. I also ran my own advent calendar about accessibility.
🌟 December 12th. I got the opportunity to write for CSS-Tricks once more, this time against gatekeeping. This is humbling because in many ways Chris Coyier is the person who put me on the map, so to say, by letting me write on CSS-Tricks at the beginning of my career and being an early CodePen adopter.
Whether it is because I was less busy at work or had time off in December, it turns out I made some contributions to open-source in 2020. Namely:
Heh. I’m not sure there is much point setting up goals since I don’t give myself the drive to pursue them. One thing though: my partner gave me a kit to make cheese, and I intend to use it. 🧀
Also, I hope I can keep writing as much as I did in 2020. After basically no output in 2019 (3 articles total), I wrote 32 articles across 2020—one of them being the #A11yAdvent calendar which eventually got split into 24 smaller entries because it was too big. Let 2021 continue on that track.
I originally wrote a long article about my thoughts on every individual puzzle, but I realised truly nobody cared, so I’m swapping it for a quick draft of my thoughts on the event as a whole. If you want to see my code, it’s in my advent-of-code GitHub repo.
I won’t be giving any explicit answers or solutions here, but I do mention tricks and things to pay attention to, so, I guess, spoiler alert.
In the early days, I stumbled upon a problem where my tests were passing, but I could not get the correct result for my puzzle input. It turns out I had a trailing newline in my input file, leading to hard-to-track errors down the line. I then tweaked the function that reads the input file so it trims the data:
const fs = require('fs')
const path = require('path')

// Read the puzzle input for the given directory, trim surrounding
// whitespace (such as a trailing newline) and split it on the delimiter
module.exports = (dir, delimiter = '\n') =>
  fs
    .readFileSync(path.resolve(dir + '/input.txt'), 'utf8')
    .trim()
    .split(delimiter)
Each puzzle goes like this: you have a short story which contains rules. These rules are not always super straightforward (I’m looking at you, day 17, and your obscure example), so it is important to read them carefully in order not to miss any subtlety. The first part is usually relatively easy to solve regardless of the implementation. The second part tends to be more demanding, and some code might need to be rewritten.
I felt like a lot of the difficulty came from performance (or lack thereof). Most puzzles are relatively straightforward to solve, but when pushed to the extreme in part 2, a naive approach tends to be too slow. It wasn’t uncommon to have millions (or more) of iterations or recursions, which eventually becomes quite compute-intensive.
For instance, both day 15 and day 23 were infinite number games, simple and quick in part 1 but requiring a very large number of rounds (10,000,000 if I’m not mistaken) for part 2. The naive array-based implementation worked fine to begin with, then completely collapsed later on, unable to output a result within hours (!!). Rewriting the code using a hash table (such as an object or a Map) yields dramatic performance improvements, solving the puzzle within 10 seconds. Rewriting the code again using a Uint32Array brings computation time down to within a single second.
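As an illustration, here is a sketch (not my actual solution) of the day 15 memory game using a Uint32Array to record the last turn each number was spoken, instead of scanning an ever-growing array on every turn:

```javascript
// Day 15 memory game: each turn, say 0 if the previous number had never
// been spoken before, otherwise say its “age” (the current turn minus
// the turn it was previously spoken on).
// `lastSeen[n]` holds the 1-based turn at which `n` was last spoken
// (0 meaning “never”), so every turn is a constant-time lookup.
const playMemoryGame = (startingNumbers, turns) => {
  const lastSeen = new Uint32Array(turns)
  let current = startingNumbers[startingNumbers.length - 1]

  // Record every starting number except the last one, which is “in play”
  startingNumbers.slice(0, -1).forEach((n, i) => (lastSeen[n] = i + 1))

  for (let turn = startingNumbers.length; turn < turns; turn++) {
    const previous = lastSeen[current]
    lastSeen[current] = turn
    current = previous === 0 ? 0 : turn - previous
  }

  return current
}

playMemoryGame([0, 3, 6], 2020) // → 436, the example from the puzzle
```

Swapping the typed array for a Map (or a plain object) keeps the same logic, just with slower lookups and more memory churn.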
Not everything has to be brute-forced, but ultimately everything is. Some puzzles could be solved efficiently in very clever ways, such as using the Chinese Remainder Theorem in day 13 or bitwise operators in day 14, but unless one has some relatively advanced math and/or computer science knowledge, such solutions are most likely out of reach. As a result, we resort to brute force, and this is when performance can be an issue, because these problems are better solved otherwise.
Test-driven implementation truly is a blessing for this event because the daily puzzles contain short data samples and their expected results. My approach was always to write the unit tests (with Ava) for the samples, then write the code until the tests pass, and finally run the code on my puzzle input.
const test = require('ava')
const { getGameScore, fightRecursive } = require('.')
const input = require('../helpers/readInput')(__dirname, '\n\n')
const example = ['Player 1:\n9\n2\n6\n3\n1', 'Player 2:\n5\n8\n4\n7\n10']
test('Day 22.1', t => {
t.is(getGameScore(example), 306)
})
test('Day 22.2', t => {
t.is(getGameScore(example, fightRecursive), 291)
})
test('Day 22 — Solutions', t => {
t.is(getGameScore(input), 34664)
t.is(getGameScore(input, fightRecursive), 32018)
})
Overall, it was a lot of fun. Difficulty varied greatly from day to day, which was pretty interesting, and besides day 20, which was an absolute nightmare, I enjoyed solving the daily puzzles.
What I particularly like about this event is that the amount of code to write is actually pretty low. Some challenges required barely more than a dozen lines. So it really is about solving problems more than writing code. My least favourite puzzles were the ones with a big code-to-solving ratio—that is, easy to solve, but hard or long to write.
My favourite ones were:
But I have to say what I enjoyed the most was browsing r/AdventOfCode and being amazed by the creativity of some participants. It truly is wonderful. :)
There is a famous quote from Heydon Pickering that says:
Accessibility is not about doing more work but about doing the right work.
Indeed, when considered early on, accessibility does not necessarily equate to doing more work. What is costly, in fact, is having to rebuild things further down the road because they cannot be made accessible. So that is a good point to make: by considering accessibility from the start, we can already get a very long way.
Then, very simply put, increasing access to software and products means enlarging their audience and reach. It’s not rocket science: if more people have access to something, more people get to use it. As of writing, the W3C reports that one person out of 7 lives with a disability (whether visible or not), which amounts to about a billion users. Additionally, it’s good to remember that everyone benefits from more usable and accessible content, beyond disabled people.
If this still doesn’t do the trick, I guess the last argument to make is the legal one. Many if not most countries have pretty strict equal-access laws and regulations in place (whether they are enforced is up for debate): Europe has the Web and Mobile Accessibility Directive, and the United States has Section 508 and the Americans with Disabilities Act, amongst other policies. You can find a comprehensive list of accessibility laws and policies on the W3C website.
As a prime example, in 2019 a blind man named Guillermo Robles sued Domino’s Pizza, an American company, for not being able to order pizza online because the website was not usable with a screen-reader. What made this court case special is that Domino’s doubled down, trying to find a loophole to avoid complying with the ADA instead of investing what was estimated at €40,000 worth of work to make their website accessible to screen-readers. The US court ruled that Domino’s Pizza did, in fact, have to comply with the accessibility regulations in place and was not exempt from providing equal access to all.
Since then, the amount of accessibility lawsuits has been on the rise, especially in the US where the regulations in place apply to the private sector as well. A lawsuit on the ground of accessibility (or lack thereof) can be costly and time-consuming for companies.
Making the case for accessibility might require bringing all these points to some extent. From my personal experience at N26, we didn’t have much buy-in from our product managers to begin with, but still went out of our way to make things as accessible and inclusive as we could. Gradually, we showed that doing things right wasn’t necessarily slower or more difficult, which helped raise awareness and got more disciplines considering the topic as part of their role. Eventually, prior to launching in the US, N26 wanted to make sure the company was not at risk of a lawsuit. So the case evolved with time: it started because we wanted to do the right thing, became the norm in the product organisation, and then got backed by the company itself from a legal standpoint.
Alright friends, that’s the end of our #A11yAdvent calendar! Thank you very much for having followed along during the entire month, and I hope you learnt a thing or two. If you have any questions or comments, please be sure to get in touch on Twitter—I’m happy to chat. In the meantime, I wish you a pleasant end of the year! 🌟
Vocal interfaces can be tremendously useful. They enable interaction for people who cannot necessarily physically interact with a device. Over the last few years, there have been dozens of inspiring stories of people getting out of difficult situations thanks to being able to quickly interact with Siri, Alexa or Cortana.
Nevertheless, it is important to remember that not everyone can benefit from oral interfaces the same way—starting with mute people, for whom this is not an option. So the first thing to consider when designing software controlled through voice commands is that voice should not be the only way. The same way soundtracks need captions, oral interfaces need physical alternatives.
Besides people without the ability to speak, people who stutter can also struggle considerably to issue voice commands at the expected cadence. In her piece Stuttering in the Age of Alexa, Rachel G. Goss says:
Because I don’t speak in the standard cadence virtual assistants like Alexa have been taught to recognize, I know this will happen every time, giving me pangs of anxiety even before I open my mouth.
Me: Alexa, set timer for f-f-f-f…
Alexa: Timer for how long?
— …f-f-f-f-fifteen minutes
— Timer for how long?
— F-f-f-f-f-f-f-f-f…
— [cancellation noise]
Of course, Alexa—or any other voice assistant—is not doing it on purpose. It’s nothing but a program. It simply has not been trained on stuttering speech. The same way facial recognition software produces racist outcomes because it is predominantly trained on white faces, the “algorithm” is not the problem. The lack of diversity and ethics in the tech industry is.
A good way to accommodate people with a stutter is to make the voice trigger (such as “Ok Google”) customisable. Some sounds are more difficult to produce than others, and if the main command starts with such sound, it can make using the technology very stressful. In the great piece Why voice assistants don’t understand people who stutter, Pedro Pena III says about Google Assistant, Alexa and Siri:
“[I] don’t think I can do it with all the g’s, the a’s, the s’s. They need to start with letters I can actually say.”
Besides people who stutter, people born deaf often speak differently from those who have been hearing voices since childhood. These speech differences, and even non-native accents, are usually not accounted for in voice-interface design, which can be exclusionary and further alienating.
As we make sites and applications more and more interactive, however, accessibility sometimes suffers. Basically, anything that needs to be developed by hand because it is not natively supported by the web platform risks causing accessibility issues down the line, whether because of designers’ lack of awareness or developers’ shortcomings in the face of a difficult technical challenge.
When adding interaction to a page that goes beyond links and forms, we have to be cautious and proceed carefully. First of all, is the solution really the best one, or is there something simpler and more straightforward? Interactive widgets such as tabs, dialogs and toggles come at a cost in usability, clarity and performance.
If you must though, rely on battle-tested implementations instead of rolling your own. While a dialog might seem as simple as displaying an element on top of the page, there is actually a lot of work going on there, and unless you’ve read the specifications or are well aware of the intricacies of such widget, you are most likely going to implement it incorrectly.
Here is a collection of vanilla JavaScript suggestions if you must implement an interactive widget:
If you don’t mind something a bit more rough around the edges, you could check the WAI-ARIA Authoring Practices, which have an entire section dedicated to understanding the expectations of interactive widgets. Scott O’Hara also maintains accessible components on GitHub. Once again, avoid rolling your own implementation if you can, and use an accessible solution instead.
Therefore, it is important to acknowledge that not everything can be automated. In fact, only a few things can be automated in the grand scheme of things. Basically, the HTML (and to some extent the CSS) can be audited for any immediately apparent markup issues.
Testing the HTML can be done with a variety of tools:
This is the perfect time and place to remind or let you know that accessiBe, the supposed “#1 fully automated accessibility solution,” is a scam. It feeds on companies believing they can solve all their accessibility concerns by implementing a 1-line JavaScript widget. They cannot. Do not fall for it.
For copy-writing and content, I can recommend:
Some low-hanging fruit for testing things:
For more all-around testing, there are pretty handy checklists:
For professional audits conducted by accessibility experts, I can recommend:
I am definitely forgetting a lot of tools here—this is just the tip of the iceberg. At the end of the day, ensuring proper accessibility and inclusivity in our products has to be done by combining various tools, methodologies and manual work. There won’t be a one-size-fits-all testing solution. Feel free to share your favourite tools on Twitter with the #A11yAdvent hashtag!
Generally speaking, being straightforward and unambiguous is the best way to avoid people being uncomfortable or confused. Figures of speech and idioms should be used sporadically as they can be difficult to grasp, especially for non-native speakers. Similarly, acronyms and abbreviations should be defined and used consistently.
The navigation, and generally speaking any sort of action, should be consistent across an entire platform. Main landmarks should not be moving around depending on the page, and a given action should be the same across pages.
Regarding tone, be mindful of being too formal or too casual. In recent years, more and more software companies have taken a casual, friendly approach to their communication, but it can come across as a little childish or patronising. Not everyone wants a lighthearted relationship with their bank or insurance provider. Humour is subjective and delicate to do well.
Similarly, error messages should be descriptive enough to understand what went wrong. Think twice before validating the wrong thing, or playfully shaming the user for making mistakes (such as “Woopsi-doopsie your name needs more than a character, silly!”). Be clear and succinct about the expectations so the issue can be addressed.
Copy-writing is a skill, and producing consistent, clear interfaces and content across an entire website or application takes time and dedication. It’s also never perfect and needs to be refined regularly.
A few years back, Harry Roberts mentioned an anecdote in one of his talks where he got the opportunity to ask a developer from Nepal whether his website was fast enough. Here is the transcript:
I said, “[W]hilst I have your attention, my analytics told me that Nepal is a problem region for my website. Nepal is apparently a very slow area to visit my website from. Is that true?”
His reply almost knocked me out. He said, “No, no, I don’t think so. I click on your site and it loads within a minute,” and that doesn’t feel slow, right? Imagine a minute load time not feeling slow.
Here in the middle of Germany, if we experienced a one-minute load time, we’d assume the site was down. We’d assume they were having an outage, and we’d probably go elsewhere.
Nepal, like many regions of the world, suffers from what most of us would consider poor connectivity. A website as optimised as Harry’s takes almost a minute to load. Harry continues:
[M]y site is incredibly highly optimized. It has to be. It’s my job to sell fast websites. If you’re visiting my site from, say, Dublin or West Coast USA, it would be fully loaded, fully rendered within 1.3 seconds. The exact same website on the exact same hosting on the exact same code base takes a minute for this person, over 45 times slower just because of where he lives. That’s the geographic penalty, the geographic tax that a lot of people in the world have to pay.
Performance is a critical topic, for many reasons. In e-commerce, any extra tenth of a second to display a page can have massive cost repercussions. Beyond the economical ramifications, ensuring sites are fast matters so that people from any region of the world can access them, regardless of bandwidth and internet speed. There is so much to discuss on the topic, and this article should not become a guide on frontend performance (nor would I be able to write such a guide anyway).
One specific thing I would like to mention though: icon fonts are notoriously bad for a variety of reasons—one of them being that they do not render until the font and styles have been fully downloaded, parsed and evaluated. When iconography is used as main content (such as in links and buttons), using an icon font might mean a broken and inaccessible interface until the font eventually shows up. The font could also fail to load, or be overwritten entirely by custom styles, leaving the UI in an awkward and possibly unusable state.
If you are interested in frontend performance and would like to ramp up your skills, I cannot recommend this series by Harry Roberts enough—definitely worth the money and a goldmine of information.
Localisation and internationalisation (sometimes shortened to l10n and i18n respectively) are broad topics requiring a lot of knowledge to do well. Large companies tend to have teams dedicated to internationalisation and the proper localisation of content. It takes time and effort.
Nevertheless, we can outline a certain amount of advice and things to consider to make sure the content is properly localised:
The html element should have a lang attribute (a two-letter ISO 639-1 code, e.g. lang="en"). Besides being indexed by search engines, it is used by screen-readers to pick the appropriate language profile, with the correct accent and pronunciation. Elements containing text in another language should equally be marked as such, for the same reason: if a page in English contains a sentence in Arabic, the DOM element containing that text should be marked with lang="ar".
Links pointing to a resource in a language other than the one the page is displayed in should be marked with the hreflang attribute. For instance, if this page were to link to a page in Russian, the link would need to be marked with hreflang="ru".
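In markup form, the two attributes look like this (the Arabic sentence and the Russian URL are placeholders):

```html
<!-- The page language, used by screen-readers to pick the right voice profile -->
<html lang="en">
  <body>
    <!-- A sentence in another language, marked as such -->
    <p lang="ar">مرحبا بالعالم</p>

    <!-- A link to a resource written in another language -->
    <a href="https://example.ru/" hreflang="ru">Читать по-русски</a>
  </body>
</html>
```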
Flags should be used to represent countries, not languages. When listing languages, refrain from illustrating them with flags if possible. If flags are important for visual identity, consider reversing the logic so countries are listed (with their flag), followed by a language (for instance “🇨🇦 Canada (English)” and “🇨🇦 Canada (Français Canadien)”).
Flags should exclusively be used to represent countries, not languages. For instance, while French is mainly spoken in France, it is also spoken in Congo and Canada, among other territorial entities. The same goes for Spanish, which is spoken all over South America yet too often represented with a Spanish flag.
Dates and currencies should ideally be authored in the format conveyed by the language of the page. For instance, a document authored in American English should use the American date format MM-DD-YYYY, while a page in German should use the German one, DD.MM.YYYY. Content in French should write currencies the French way, such as “42 €”, with a space between the amount and the symbol, which comes after the amount. The native Intl API and libraries like Luxon and accounting.js can help with this process.
Be mindful of bias when designing interfaces and the systems supporting them. For instance, having one first name and one last name is quite a Western structure. All around the world, people have many names, middle names, initials, no first name, no last name, names with a single character… If you have never read Falsehoods Programmers Believe About Names, I cannot recommend it enough.
Internationalisation is hard to do well. Mistakes will be made, and it’s never going to be perfect. It’s a matter of iterating on it and doing better. In a world as connected as ours, we as organisations providing content and products, need to put aside our bias and design systems which fit everyone, whoever they are, and wherever they come from.
Anxiety disorders are shockingly common too. The most recent numbers I could find from the Anxiety and Depression Association of America estimate that almost one out of 5 American adults (18%) suffers from some form of anxiety. That means it is something we ought to keep in mind when building digital interfaces and experiences.
Ultimately, it is difficult to figure out what people will feel uncomfortable with, but there is some general advice we can follow to make things more pleasant for everyone—especially people suffering from anxiety:
Remove the notion of urgency. The idea that something is only available for a short amount of time is one of the main causes of anxiety among users. By removing this notion altogether, we can make things less stressful. For instance, if a two-factor authentication code is only valid for 1 minute, it might not be necessary to display a timer counting down. Worst case scenario, the user misses the mark and asks for another code.
Focus on clarity. The more straightforward the interface and its content, the less stressful it is. Avoid double-negatives and reversed checkboxes and be consistent with phrases and terminology. Stay away from scaremongering like dramatising non-critical actions (such as not wanting to benefit from a promotion), or shaming users for performing something (such as opting out from a newsletter).
Provide reassurance. Any sensitive action should be marked as such (like placing an order, or deleting an entry), and it should be clear whether there will be an opportunity to review before confirming. The ability to undo actions is also helpful to know that mistakes can be made and recovered from.
Ultimately, a lot of the work in that regard is about deeply caring for users and staying away from aggressive marketing tactics that rely heavily on inducing anxiety. As a further read, I highly recommend A Web of Anxiety by David Swallow from The Paciello Group, which goes into more detail.
Animations can also be overused or misused. For most people, that’s no big deal, but some people react poorly to moving content. It can range from frustration to motion sickness (known as vestibular disorder, which is shockingly common by the way) to more critical outcomes like seizures. So it’s important to use animations responsibly.
A relatively low-hanging fruit is to respect the prefers-reduced-motion media query when animating content on screen. Note that I use “animating” as a blanket word covering animations and transitions alike. I wrote about building a reduced-motion mode in the past and would recommend reading the article to get the full picture.
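As a sketch, a common (if blunt) way to honour that preference is to globally shorten animations and transitions for users who asked for reduced motion:

```css
@media (prefers-reduced-motion: reduce) {
  *,
  *::before,
  *::after {
    /* Near-zero durations rather than `none`, so JavaScript relying on
       animation or transition end events keeps working */
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.01ms !important;
  }
}
```

A reduced-motion mode built into the design system, with intentional per-component fallbacks, is of course preferable to this catch-all rule.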
Another easy way to improve the experience of people who are uncomfortable with animations is to wait for user interactions to trigger them. Animations are a very effective tool when used subtly and as the result of an action. That means no autoplaying videos or carousels: if they start automatically, provide quick controls to pause the movement. The WCAG are pretty clear about this in Success Criterion 2.2.2 Pause, Stop, Hide:
For any moving, blinking or scrolling information that (1) starts automatically, (2) lasts more than 5 seconds, and (3) is presented in parallel with other content, there is a mechanism for the user to pause, stop, or hide it unless the movement, blinking, or scrolling is part of an activity where it is essential.
The last very important point to pay attention to with highly animated content—whether it is automatic or as the result of a user action—is to avoid excessive flashes as they can cause seizures. The general rule of thumb is to avoid more than 3 flashes within one second. The details of the flashing rule are outlined more in depth in the WCAG.
For a more comprehensive look at using animations responsibly and with accessibility in mind, I cannot recommend enough Accessible Web Animations by Val Head.
This seems like the perfect opportunity to point out that jokingly using the word “triggered” to mean “being bothered by something” can be considered quite inappropriate and ableist. PTSD triggers are a real thing, which can have dire consequences. It is considerate not to dismiss and minimise the difficulty of such experiences by misusing the term that describes them. Possible alternatives: “grinds one’s gears” or “bothers”.
At the core of content warnings, there is the need to acknowledge that every individual is different, and what might not be a sensitive topic to you might in fact be very difficult to approach for someone else. Trigger warnings are essentially an empathetic feature, and they need to be designed with an open mind.
Of course, it is not possible to account for every potential trigger. Everybody is different and sensitive to a variety of different topics and situations. Nevertheless, there are commonly accepted lists of triggers (such as sexual violence, oppressive language, representation of self-harm…).
Regarding the implementation, it could be as simple as a paragraph at the top of the main section mentioning the potentially sensitive topics. For instance:
Trigger warnings: Explicit Sex Scene, Self-Harm, Transphobia
This is a pretty basic but effective approach. It could be enhanced with more information about trauma triggers, link(s) to mental health websites, and even a way to complement or update the list.
Trigger warnings: Explicit Sex Scene, Self-Harm, Transphobia
What are trigger warnings? · Get help with PTSD · Suggest different warnings
For audio and video content, it could be announced and/or shown at the beginning of the track. For imagery, it could be overlayed on top of the image, requiring a user interaction to effectively display the media. This is the approach taken by many social media such as Twitter.
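As a minimal sketch of the “requires an interaction” pattern, the natively keyboard-accessible details element can hide a sensitive image behind an explicit click (file name and warning text are placeholders):

```html
<details>
  <summary>Content warning: self-harm (select to display the image)</summary>
  <img src="artwork.jpg" alt="Description of the artwork" />
</details>
```

Production implementations usually blur or overlay the media instead, but the principle is the same: the user chooses to reveal it.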
This could even be considered a customisable user setting on a given platform. For instance, as a user I could mark transphobia and self-harm as sensitive topics for me, but consider nudity and sexuality okay. This way, the site (and its algorithms) can not only tailor the content that it shows me based on my content preferences, but also save me discomfort and potential triggers.
For hard-of-hearing and deaf people of course, but also for people for whom processing audio might not be possible (such as those without headphones in a loud environment) or overwhelming (which can be the case for people on the autistic spectrum). They are also very handy for non-native speakers, for whom understanding content may be easier when seeing it written rather than only hearing it spoken.
It turns out that authoring good captions is actually surprisingly difficult, and the quality varies greatly from source to source. Here is a collection of tips to make captions as useful as possible:
Captions should usually live in the safe area of a 16:9 screen resolution, at the bottom of the screen. They might be temporarily moved when obscuring relevant visual content such as embedded text.
Captions are meant to be read, and therefore their size matters. They should be big enough to be readable at most distances, but not so big that they need to be refreshed too often.
Like for any text content, contrast is key. The ideal colors for captions on a solid dark background are white, yellow and cyan. Colors can also be used to denote different speakers within a conversation, which can really help understanding.
The length of captions should be kept short (~40 characters) and the text should not stick to the sides since differences in screen calibrating could cut the edges off. A caption should usually not exceed 2, maybe 3 lines.
Captions should be displayed for 1 or 2 seconds, and changes of captions should come with a brief (200–300ms) uncaptioned pause to make sure the reader can notice a change of text even when lines look alike (length, etc.).
Language-specific typographic rules should be respected. Words should be broken where possible according to the language they are depicted in, and sentences should be split on punctuation as much as possible.
Special care can be taken to make sure not to spoil upcoming events before they appear on screen. Nothing like knowing what happens before it actually does because the caption was too revealing.
Important sound effects and subtleties (such as tone, emotion, loudness, music…) should be explicitly mentioned. The same goes if the sound or dialogue comes from something off-screen.
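On the web, these tips translate to caption files such as WebVTT. A short hypothetical sketch (speaker names and timings made up) showing speaker voices, an off-screen sound effect, and the brief pause between cues:

```
WEBVTT

00:00:01.000 --> 00:00:03.000
[ominous music]

00:00:03.250 --> 00:00:05.500
<v Hannah>I told you not to open that door.

00:00:05.750 --> 00:00:07.000
<v Miguel>(whispering) It was already open.
```

The v voice spans let players style each speaker differently, for instance with the color-coding mentioned above.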
As you can see, there are a lot of things to consider to make captions accessible. Some content is easier to caption than others (a single speaker, few editorial cuts, no sound effects or music…). The more attention is devoted to captions, the more accessible the content becomes. It is particularly critical when the main content of a given page or product is provided through videos (movies, series, screencasts…).
Setting aside the fact that it would sometimes be nice to zoom in native applications too, disabling zooming on the web is a big problem. It is the digital equivalent of saying “I personally like taking the stairs, so let’s remove the lift to gain some space.” It’s selfish and ill-guided.
Vision deficiencies are among the most common disabilities, and an awful lot of people need visual correction (through glasses or lenses) to see properly. Even if only a small portion of them actively need to zoom in, this means many people rely on this feature to browse the web conveniently.
Taking myself as an example. I have terrible eyes. I am very short-sighted and have a lazy-eye I can somewhat control thanks to years of orthoptics. But my poor vision is making me tired, which causes my lazy-eye to act up. And my lazy-eye acting up makes my eyes more tired. Which means my vision is not great. That’s why I zoom most sites between 100% and 150%.
So the takeaway is: do not disable zooming.
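On the web, zooming is typically disabled through the viewport meta tag, so the takeaway translates to a simple before/after:

```html
<!-- Avoid: blocks pinch-to-zoom for everyone -->
<meta
  name="viewport"
  content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no"
/>

<!-- Prefer: responsive layout, with zooming left untouched -->
<meta name="viewport" content="width=device-width, initial-scale=1" />
```

Note that some browsers now ignore user-scalable=no precisely because of how harmful it is, but it should not be relied upon.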
Additionally, I use the pinch-and-zoom trackpad gesture from my MacBook Pro on every single website. Every time I want to read something, I pinch to zoom to dramatically enlarge the content, read, then pinch out to scroll or navigate. Rinse and repeat.
I’m fortunate that macOS provides this out of the box. Some people rely on assistive technologies for a similar feature. Note that screen magnifying techniques are ten times more common than usage of screen-readers, so it’s not an edge case that can easily be omitted. AxessLab has a good post about considering screen magnifiers.
]]>While doing research for this article, I learnt about the difference between legibility and readability. The former is the product of the design of a font, based on its characteristics such as height, width, and thickness. Readability on the other hand is related to how the font is used, such as font size, letter spacing, line height and color.
The first thing to remember when it comes to readability is that there is no one-size-fits-all solution. While there are commonly accepted suggestions such as avoiding small sizes and ensuring decent color contrast, it is good to remember that everyone is different and what works for me might not work for you.
As an example, a couple of years back a person came to me after my talk on accessibility and told me that my advice about having super sharp contrast for body text was not always working for them, a dyslexic person who prefers something a little more toned down. Along the same lines, some people might find serif fonts easier to read, and some not.
Let’s walk through the things one can do to improve readability for most:
- Keep line length in check, for instance by sizing the content column with the ch CSS unit.
- Limit paragraph length to 70 to 90 words to make paragraphs easier to read.

As an example, this blog on desktop uses a 22.4px font size and 33.6px line height (a 1.5 ratio). The content is left-aligned, and lines are about 85 characters long in paragraphs that are around 95 words on average. The text color is #444 on top of plain white, which has a contrast ratio of ~9.73—enough for any size of text.
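As a rough sketch, these measurements could translate to CSS along these lines (an illustration of the numbers above, not this blog's actual stylesheet):

```css
/* Readable defaults based on the measurements listed above */
body {
  color: #444; /* ~9.73:1 contrast ratio on plain white */
  font-size: 22.4px;
  line-height: 1.5; /* yields 33.6px */
}

p {
  max-width: 85ch; /* keeps line length around 85 characters */
}
```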
You might have noticed I do not give any recommendation as to which font to choose. Besides being a design choice in many aspects, the thing is most properly-designed professional fonts will do just fine provided they are not too cursive or exotic. It’s also good to remember a lot of people override the fonts in their browser with one they can conveniently read (Comic Sans is found to be a great typeface by some dyslexic people for instance).
]]>As Léonie Watson explains in her article about accessible emojis, emojis are still not very accessible to screen-readers unfortunately, and tend to be poorly or completely undescribed to their users. They are not reported as images in the accessibility tree, and they are not always assigned an accessible name. These are the 2 things to fix.
The role="img" attribute can be set to assign imagery semantics to a DOM node. The accessible name can be defined with the aria-label attribute. For instance:
<span role="img" aria-label="Sparkly pink heart">💖</span>
That’s the strict minimum to make emojis perceivable to all. In his article about accessible emojis, Adrian Roselli expands on Léonie’s solution to include a small tooltip to display the emoji name as well which is a nice touch.
Of course, most web pages are not coded manually, which means the label will have to be dynamically inserted when an emoji is found. Programmatically finding emojis is just a regular expression away, so this is the easy part so to speak.
Assigning the description programmatically is harder. It turns out there is no obvious way to retrieve the description for an emoji (also known as its “CLDR short name”). Packages like emoji-short-name or emojis.json provide a comprehensive map from most emojis to their English short names, so this could be a solution, although it has its limits (lack of internationalisation, potential performance cost…).
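As a rough sketch of both steps combined, here is what the wrapping could look like. The SHORT_NAMES object below is a hypothetical stand-in for a real lookup such as the one provided by the emoji-short-name package:

```javascript
// Hypothetical stand-in for a real short-name map such as the one
// exposed by the emoji-short-name package.
const SHORT_NAMES = { '💖': 'Sparkling heart' }

// Wrap every emoji we know about in a span carrying imagery
// semantics (role="img") and an accessible name (aria-label).
const labelEmojis = html =>
  html.replace(/\p{Extended_Pictographic}/gu, emoji => {
    const name = SHORT_NAMES[emoji]
    return name
      ? `<span role="img" aria-label="${name}">${emoji}</span>`
      : emoji
  })
```

Emojis without a known short name are left untouched rather than being given an empty label.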
]]>We have already come across the aria-disabled and aria-describedby attributes, so it’s a good time to talk more about ARIA as a whole. It stands for Accessible Rich Internet Applications. It’s a specification aiming at enhancing HTML in order to convey more meaning and semantics to assistive technologies, such as screen-readers.
The first advice when it comes to ARIA is to avoid using it when possible. It is a powerful tool that can completely change the way a page or widget gets interpreted by assistive technologies, for good or for bad, so it needs to be used carefully. Generally speaking, prefer using native HTML when possible, and only use ARIA when HTML is not enough (such as for tabs or carousels).
There are a lot of handy guides on the internet on building accessible widgets with the help of ARIA—Inclusive Components by Heydon Pickering has to be one of my favourite.
One thing I would like to bring your attention to is the concept of “live” regions. A live region is an area of a page that announces its content to screen-readers as it gets updated. Consider a container for notifications (or snackbars, croutons or whatever yummy thing they are called) or a chat feed.
<div role="log" aria-live="polite">
<!-- Chat messages being inserted as they are sent -->
</div>
<div role="alert" aria-live="assertive">
<!-- Important notifications being inserted as they happen -->
</div>
A few things to know about live regions:
- A live region needs to exist and have its aria-live attribute when the document loads. It cannot be dynamically inserted at a later point unfortunately.
- The role attribute is not mandatory, but recommended (role="region" if no other role fits). Some roles (such as log, status or alert) have an implicit aria-live value, but it is recommended to specify the latter as well for maximum compatibility.
- Prefer polite instead of assertive as the latter interrupts ongoing diction to announce the new content, which should be reserved for critical announcements.
- Use off as a value to tell the assistive technologies they no longer have to track changes in that container.

In modern web design, it is not uncommon to have a link (or a button) that visually has no text, and is just an icon. Think about social icons, or items in a compact navbar. Relying solely on iconography can be tricky, but it can work, especially when icons are clear and well known.
Yet, even if no text is technically displayed, it is important to provide alternative content for people using screen-readers. It turns out making an accessible icon link is not that straightforward and I thought it would deserve its own little article.
As an example, let’s consider a Twitter icon link using the iconic bird. We will use SVG for the icon itself since it’s a vector format that does not require an additional HTTP request when inlined.
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16">
<path
d="M16 3.538a6.461 6.461 0 0 1-1.884.516 3.301 3.301 0 0 0 1.444-1.816 6.607 6.607 0 0 1-2.084.797 3.28 3.28 0 0 0-2.397-1.034 3.28 3.28 0 0 0-3.197 4.028 9.321 9.321 0 0 1-6.766-3.431 3.284 3.284 0 0 0 1.015 4.381A3.301 3.301 0 0 1 .643 6.57v.041A3.283 3.283 0 0 0 3.277 9.83a3.291 3.291 0 0 1-1.485.057 3.293 3.293 0 0 0 3.066 2.281 6.586 6.586 0 0 1-4.862 1.359 9.286 9.286 0 0 0 5.034 1.475c6.037 0 9.341-5.003 9.341-9.341 0-.144-.003-.284-.009-.425a6.59 6.59 0 0 0 1.637-1.697z"
/>
</svg>
Now, let’s start by wrapping it up with a link:
<!-- Incomplete: please do *not* copy and paste this snippet -->
<a href="https://twitter.com/KittyGiraudel">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16">…</svg>
</a>
Unfortunately, at this stage this link contains no accessible name, which is a big problem. Let’s add some descriptive text, that we make visually hidden yet accessible.
<!-- Incomplete: please do *not* copy and paste this snippet -->
<a href="https://twitter.com/KittyGiraudel">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16">…</svg>
<span class="sr-only">Twitter</span>
</a>
Chris Heilmann asked me whether using the aria-label attribute or a <title> element in the SVG would be simpler than having a visually hidden element. The visually hidden element provides better support with older assistive technologies and avoids aria-label internationalisation issues.
There is still a bit more we need to do. Since we provided a descriptive text, we can safely remove the SVG markup from the accessibility tree by adding the aria-hidden attribute.
<!-- Incomplete: please do *not* copy and paste this snippet -->
<a href="https://twitter.com/KittyGiraudel">
<svg
aria-hidden="true"
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
>
…
</svg>
<span class="sr-only">Twitter</span>
</a>
Last but not least, svg elements can be focused in Internet Explorer, which is becoming less and less of a problem overall—still, we should correct that with the focusable attribute.
<a href="https://twitter.com/KittyGiraudel">
<svg
aria-hidden="true"
focusable="false"
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
>
…
</svg>
<span class="sr-only">Twitter</span>
</a>
As a last touch, I would recommend adding the text content in the title attribute on the link as well. This does not enhance accessibility per se, but it displays a small tooltip when hovering the link, which can be handy for non-obvious iconography.
<a href="https://twitter.com/KittyGiraudel" title="Twitter">
<svg
aria-hidden="true"
focusable="false"
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
>
…
</svg>
<span class="sr-only">Twitter</span>
</a>
Our final link (with some additional styles to make it easier on the eye): Twitter
Now that we have sorted out how to make our icon links accessible, we can safely make a little React component for that (out of sight, out of mind), using a <VisuallyHidden /> component.
const IconLink = ({ Icon, ...props }) => (
<a {...props}>
<Icon aria-hidden='true' focusable='false' />
<VisuallyHidden>{props.title}</VisuallyHidden>
</a>
)
Then it can be used like this:
const Twitter = props => (
<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' {...props}>
<path d='M16 3.538a6.461 6.461 0 0 1-1.884.516 3.301 3.301 0 0 0 1.444-1.816 6.607 6.607 0 0 1-2.084.797 3.28 3.28 0 0 0-2.397-1.034 3.28 3.28 0 0 0-3.197 4.028 9.321 9.321 0 0 1-6.766-3.431 3.284 3.284 0 0 0 1.015 4.381A3.301 3.301 0 0 1 .643 6.57v.041A3.283 3.283 0 0 0 3.277 9.83a3.291 3.291 0 0 1-1.485.057 3.293 3.293 0 0 0 3.066 2.281 6.586 6.586 0 0 1-4.862 1.359 9.286 9.286 0 0 0 5.034 1.475c6.037 0 9.341-5.003 9.341-9.341 0-.144-.003-.284-.009-.425a6.59 6.59 0 0 0 1.637-1.697z' />
</svg>
)
const MyComponent = props => (
<IconLink
href='https://twitter.com/KittyGiraudel'
title='Twitter'
Icon={Twitter}
/>
)
Opening a link in a new tab also comes with accessibility considerations that should not be overlooked. I went over opening links in a new tab in another article. Although it was showcasing a React implementation, the knowledge should be easy to transfer.
Sara Soueidan went through accessible icon buttons on her blog, and shares interesting tips to debug the accessible name in the browser developer tools.
Additionally, Florens Verschelde has great content about working with SVG icons, including composing, spriting, styling and rendering icons. I cannot recommend it enough!
]]>Forms need to be built in a particular way so that everyone can use them efficiently. Whether you are a power-user navigating with the keyboard for speed, or a blind or short-sighted person using a screen-reader, forms can be tedious to fill, and that’s why we need to pay particular attention to them.
Let’s go through a little recap of what is important when building accessible forms.
All fields should be labeled, regardless of design considerations. Labels can always be visually hidden, but they have to be present in the DOM, and be correctly linked to their field through the for/id pair. Placeholders are not labels.
Additionally, labels should indicate the expected format if any, and whether the field is required or not. If all fields are required, an informative message can be issued at the top of the form to state so.
<label for="ssn">Social Security Number (xxx-xxx-xxx) (required)</label>
<input type="text" name="ssn" id="ssn" required />
Poor error reporting has to be one of the main issues regarding forms on the web. And to some extent, I can understand why as it’s not very intuitive.
First of all, errors should be reported per field instead of as a whole. Depending on the API in place, it’s not always possible unfortunately, and that’s why it’s important as frontend developers to be involved in API design as well.
A field should be mapped to its error container through an aria-describedby/id attribute pair. It is very important that the error container is always present in the DOM regardless of whether there are errors (and not dynamically inserted with JS), so the mapping can be done on the accessibility tree.
<label for="age">Age</label>
<input type="number" name="age" id="age" aria-describedby="age-errors" />
<div id="age-errors"></div>
After displaying an error, the focus should be moved to the relevant field. In case multiple errors were displayed, the focus should be moved to the first invalid field. This is why it is interesting to use HTML validation when possible, as this is all done out of the box.
Disabled buttons cannot be discovered by screen-readers as they are, so to speak, removed from the accessibility tree. Therefore, the usage of disabled buttons can cause an issue when the sole button of a form is undiscoverable by assistive technologies.
To work around this problem, the submit button of a form should effectively never be disabled and should trigger form validation when pressed.
The button can have aria-disabled="true" to still be discoverable and indicate that it is technically not active (due to missing or invalid information for instance). CSS can be used to make a button with that ARIA attribute look like a disabled button to make it visually understandable as well.
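For instance, a sketch of such styling, targeting the ARIA attribute directly:

```css
/* Make aria-disabled buttons look inactive while keeping them
   focusable and discoverable by assistive technologies */
button[aria-disabled='true'] {
  opacity: 0.5;
  cursor: not-allowed;
}
```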
There are some rare cases where having a fully disabled button is acceptable.
Radio inputs with the same name should be grouped within a <fieldset> element which has its own <legend>. This is important so screen-readers announce the purpose of the group; radio inputs sharing the same name can then be cycled through with the arrow keys.
<fieldset>
<legend>How comfortable are you with #a11y?</legend>
<label for="very"> <input type="radio" name="a11y" id="very" /> Very </label>
<label for="so-so">
<input type="radio" name="a11y" id="so-so" /> So-so
</label>
<label for="not-at-all">
<input type="radio" name="a11y" id="not-at-all" /> Not at all
</label>
</fieldset>
]]>There are countless resources on the web about authoring good alternative texts to images, my favourite of all is this ultimate guide by Daniel Göransson, so I will just give a bit of a recap.
The description belongs in the alt attribute of an image.

Finally, there are some cases where you can leave out the alternative text entirely, and leave the attribute empty (alt="").
That’s the main gist. Images are a critical part of the web—we have to appreciate that not everyone can perceive them the same way, and that’s why it’s critical to describe them properly.
]]>The thing we usually don’t think about is that many assistive technologies such as screen-readers were initially authored with the “original web” in mind and rely on page (re)loads to announce the page context, namely the page title (held by the <title> element).
When building a SPA—no matter the framework—it is important to do some work to announce the title when following router links. Two things need to happen: the document title needs to be updated, and the new title needs to be announced to assistive technologies.
A nice solution is to have a visually hidden element at the top of the page which receives the new title when navigating, and move the focus onto that element so the content is read. Ideally, the skip link lives right after that node, so the title announcement is immediately followed by the option to skip to the content.
Here is what our HTML should look like:
<body>
<p tabindex="-1" class="sr-only">…</p>
<a href="#main" class="sr-only sr-only--focusable">Skip to content</a>
<!-- Rest of the page -->
</body>
And our vanilla JavaScript. Note that this uses no specific framework—it’s just a made-up API to illustrate the concept.
const titleHandler = document.querySelector('body > p')
router.on('page:change', ({ title }) => {
// Render the title of the new page in the <p>
titleHandler.innerText = title
// Focus it—note that it *needs* `tabindex="-1"` to be focusable!
titleHandler.focus()
})
You can find a more in-depth tutorial for React with react-router
and react-helmet
on this blog. The core concept should be the same no matter the framework.
Note that if you can guarantee there is always a relevant <h1> element (independently of loading states, query errors and such), another possibly simpler solution would be to skip that hidden element altogether, and focus the <h1> element instead (still with tabindex="-1").
In traditional websites using hyperlinks the right way, the page is fully reloaded when following a link and the focus is restored to the top of the page. When navigating with the keyboard, that means having to tab through the entire header, navigation, sometimes even sidebar before getting to access the main content. This is bad.
Single-page applications are not free from this consideration either. Following a link tends to reload the content area and therefore loses the current focus, which sends it to the top of the document, causing the same issue. So either way, there is work to do.
To work around the problem, a common design pattern is to implement a skip link, which is an anchor link pointing to the main content area. So how shall our skip link work?
Here is our HTML:
<body>
<a href="#main" class="sr-only sr-only--focusable">Skip to content</a>
</body>
For the styling we can use what we learnt in day 3 of this calendar, applying a small twist to undo the hiding styles when the element is focused.
.sr-only.sr-only--focusable:focus,
.sr-only.sr-only--focusable:active {
clip: auto !important;
-webkit-clip-path: none !important;
clip-path: none !important;
height: auto !important;
overflow: visible !important;
width: auto !important;
white-space: normal !important;
}
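For the anchor to work reliably, the main content area referenced by the link needs the matching id and, depending on the browser, a tabindex="-1" so it can effectively receive focus. A sketch:

```html
<!-- Target of the skip link shown above -->
<main id="main" tabindex="-1">
  <!-- Main content -->
</main>
```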
You can play with a live demo for skip links on CodePen.
]]>In this article for instance, the post title is a <h1> and then we have a bunch of <h2>. If any of these end up needing sub-sectioning, there would be <h3> and so on and so forth. The outline looks like this (as of writing):
1. A11y Advent Calendar
1.1. Day 1: What is Accessibility?
1.2. Day 2: Evaluating Accessibility
1.3. Day 3: Hiding Content
1.4. Day 4: Self-Explanatory Links
1.5. Day 5: Document Outline
To check the structure of a document, we can use the same accessibility bookmarklet or extension we’ve mentioned yesterday. When activating it, one of the options is “Headings”, which lists all headings in order and level. From there, we can make sure the structure makes sense, headings are in the right order, and no level is skipped.
For years now, there have been discussions (and even proposals) about taking sectioning elements like section into account in the document outline to create sorts of sub-structures where every root would go back to h1. This has never been implemented by any browser or supported by any assistive technology so this is basically moot at this point. Stick to appropriate heading levels.
For more information about the history behind the document outline and the proposed resolution algorithm, I encourage you to read the Document Outline Dilemma by Amelia Bellamy-Royds which is a fantastic overview of the topic.
That means it is important for links to be self-explanatory. In other words, a link should make sense on its own without the rest of the sentence it lives in or its visual surroundings. The content of the link should describe what the link does.
To have a look at what links look like in a given page, I would highly commend this accessibility bookmarklet or extension by Tobias Bengfort. Drag it onto your bookmark bar, then activate it on any page to be prompted with a little dialog which contains a dropdown menu offering 3 options: landmarks, headings and links. The last one is the relevant one in this case.
If you spot a link that does not make much sense on its own, revise its content until it does. No more “click here”, “learn more” or other non-sensical links! Similarly, avoid mentioning “link” in the text since most screen-readers already do that.
As an example, consider a link visually stating “Edit” in a list of items. It makes sense, because the link belongs to a list item, therefore it is implied that it is for that specific item. But when listing links or just tabbing through, all links end up saying “Edit”, which is not good at all. To fix that problem, we can apply what we learnt yesterday and add some visually hidden content to the link.
<a href="/edit/1234567890">
Edit <span class="sr-only">item [distinguishable item name]</span>
</a>
]]>As you might guess, most people browse the web by looking at it, and then tapping or clicking links and buttons to interact with it. This mode of consumption works because most people have decent eyesight and can look at the page. That being said, some people (including but not limited to blind persons) rely on screen-readers to browse the web. These are pieces of software that read the content of a page out loud, and provide navigation mechanisms to browse web content without necessarily relying on visual input.
When using a screen-reader, one does not always benefit from the surrounding visual context. For instance, an icon might make sense on its own, but if someone cannot perceive the icon, then they might not understand the role of a button. This is why it is important to provide assistive text, even though it might be visually hidden.
One might think using display: none or the hidden attribute should be enough, but these techniques also remove the content from the accessibility tree and therefore make it inaccessible.
The quest for a combination of CSS declarations to visually hide an element while keeping it accessible to screen-readers is almost as old as the web, and gets refined every couple of years. The latest research to date on the matter has been conducted by Gaël Poupard in his CSS hide-and-seek article translated here. The consensus is that the following code snippet is enough to hide an element while making its content still available to assistive technologies:
.sr-only {
border: 0 !important;
clip: rect(1px, 1px, 1px, 1px) !important;
-webkit-clip-path: inset(50%) !important;
clip-path: inset(50%) !important;
height: 1px !important;
overflow: hidden !important;
margin: -1px !important;
padding: 0 !important;
position: absolute !important;
width: 1px !important;
white-space: nowrap !important;
}
What is important to think through is when to hide content entirely (with display: none for instance), and when to hide it visually only. For instance, when providing additional information to an icon, it should be visually hidden since the point is to have it read by screen-readers. But when building tabs, or a content toggle, it should be hidden entirely, because there is an interaction required to access it.
In 2020, the content-visibility CSS property appeared as a way to improve performance by hinting the browser (Chrome, as of writing) to skip rendering of a certain element until it is within the viewport. While it comes from a good place, it is not without shortcomings in terms of accessibility.
Indeed, content made hidden with content-visibility will effectively be absent from the accessibility tree entirely (just like with display: none) which can be quite an issue for things like landmarks, links or headings (see day 4 and 5 of this calendar). Therefore, reserve this CSS property for things which are neither landmarks nor headings or heading containers.
For more information about the impact of content-visibility on content accessibility, I recommend Content-visibility and Accessible Semantics by Marcy Sutton and Short note on content-visibility: hidden by Steve Faulkner.
]]>I had about 4.7Kb of CSS, and less than 1Kb of JavaScript, so I figured the HTTP requests weren’t all that necessary and I could inject styles and scripts directly within the page to avoid HTTP roundtrips. Inlining CSS and inlining JavaScript is explained in the 11ty docs, so not really worthy of a blog post, I hear you say.
Now the thing is not all styles are necessary on all pages. For instance, the home page has some components that do not exist anywhere else on the site, and an article page like this one has a lot of styles which are not needed anywhere else (code snippets, figures, tables, post date…). So instead of inlining 5Kb of CSS in the head, most of which would not be needed, I decided to split it across pages.
My CSS (formerly authored in Sass) is split by concern, somewhat following the 7-1 pattern (my JavaScript also follows a similar structure, but I’m going to leave it aside from now on for the sake of simplicity). That’s good because it means I didn’t really have to figure out how to break it down—I only needed a way to include specific parts in specific contexts.
The implementation concept is relatively simple: in the <head> of the document, include all core styles in a <style> tag. And in specific layouts and pages, include specific stylesheets within a <style> tag as well. No more <link rel="stylesheet"> and no more monolithic stylesheet with the entire site’s styles.
Now, including files can be done with the {% include %} tag. From 11ty ≥0.9.0, it is possible to include relative paths so files do not have to live in the _includes folder. That means we can keep a project structure like this (irrelevant parts omitted):
├── _includes/
└── assets/
├── css/
│ ├── base/
│ ├── components/
│ ├── layouts/
│ └── pages/
└── js/
Now, I wanted to minimise the amount of boilerplate needed to include specific styles or scripts in a template, while keeping it easy to maintain. For instance, in my post.liquid layout, I wanted to have this include at the top:
{% include "styles.html", paths: "
components/blockquote,
components/code,
components/figure,
components/footnotes,
components/post-date,
components/post-navigation,
components/table
" %}
So I came up with this small _includes/styles.html Liquid partial:
{% if paths %}
{% assign paths = paths | split: "," %}
{% capture css %}
{% for path in paths %}
{% include "../assets/css/{{ path | strip }}.css" %}
{% endfor %}
{% endcapture %}
<style>{{ css }}</style>
{% endif %}
Alright, so there is quite a lot to unpack here. Here is the breakdown:

- If the paths argument was not provided, we do nothing.
- We convert paths from a string to an array by splitting it on commas.
- We include every file from the assets folder while making sure to trim its path with strip. This is what allows us to have the paths argument authored across multiple lines for clarity.
- We capture the output of the loop in a css variable containing all our relevant styles.
- We render that variable within a <style> tag.

The script.html partial works exactly the same way except it looks into assets/js and renders a <script> tag. I guess both partials could be abstracted into a single one, but I don’t think it’s particularly necessary.
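For reference, here is a sketch of what that script.html partial could look like, mirroring the styles one (a reconstruction, not code copied from this site):

```liquid
{% if paths %}
  {% assign paths = paths | split: "," %}
  {% capture js %}
    {% for path in paths %}
      {% include "../assets/js/{{ path | strip }}.js" %}
    {% endfor %}
  {% endcapture %}
  <script>{{ js }}</script>
{% endif %}
```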
When it comes to minification, there are a few approaches here. One way would be to have a cssmin filter based on clean-css (or any other CSS minifier). Inside of the styles.html partial, we’d apply | cssmin to our CSS so it gets optimised.
I went a slightly different path and have an 11ty transform to minify HTML with html-minifier. The nice thing about it is that it offers a minifyCSS and a minifyJS option to compress styles and scripts authored in <style> and <script> tags respectively. Therefore I have a single transform to minify everything.
I decided to run that transform only in production because a) I don’t like to have compressed styles and scripts in development since it can make them harder to debug and b) minification is actually not cheap and can take a few seconds on a site as small as mine which means it would dramatically slow down compilation.
const htmlmin = require('html-minifier')

module.exports = function (config) {
  if (process.env.NODE_ENV === 'production') {
    config.addTransform('htmlmin', (content, path) =>
      path.endsWith('.html')
        ? htmlmin.minify(content, { minifyCSS: true, minifyJS: true })
        : content
    )
  }
}
That’s about it, really. To sum up: no more HTTP requests for my styles and scripts, which improves performance by reducing the number of HTTP roundtrips. Of course, we no longer benefit from caching, but I believe the performance gain is worth it.
I hope this helps! ✨
]]>The WCAG offer a dozen guidelines organised under the POUR principles, which stands for Perceivable, Operable, Understandable and Robust. Each guideline is testable through success criteria (a total of over 80 of these), each of them with 3 levels of conformance: A, AA and AAA.
For instance, the success criterion related to sufficient color contrast looks like this:
Success Criterion 1.4.3 Contrast (Minimum)
(Level AA)
The visual presentation of text and images of text has a contrast ratio of at least 4.5:1, except for the following:
- Large Text: Large-scale text and images of large-scale text have a contrast ratio of at least 3:1.
- Incidental: Text or images of text that are part of an inactive user interface component, that are pure decoration, that are not visible to anyone, or that are part of a picture that contains significant other visual content, have no contrast requirement.
- Logotypes: Text that is part of a logo or brand name has no contrast requirement.
Generally speaking, reaching a conformance level of A is the strict minimum and required by law, and it is usually encouraged to go for AA. Now, it is not always possible to reach AAA on all success criteria depending on the site, so it’s a nice objective to aim for but shouldn’t be the end goal.
What is important to remember is that even beyond strict specification conformance, there are still a lot of things that can be done to improve accessibility. As we’ve seen yesterday, this is a broad—almost endless—topic, so it should never be considered done per se and can be actively worked on at all times.
Interestingly enough, the WCAG also apply to mobile interfaces. There is no other significant body of work covering mobile accessibility, so the WCAG can and should be followed (when applicable) for mobile applications, even though they are not written for web technologies.
]]>Nicolas Hoizey pointed out on Twitter that markdown-it-footnote does essentially the same thing with less integration and using Markdown syntax instead of Liquid.
Maybe, but my main problem with it is that it’s not super accessible (certainly not by default), even considering all the customisation options. That’s because the footnote references end up being numbers (e.g. [1]) which are meaningless when listed or tabbed through because they are devoid of their surrounding context.
I have recently blogged about accessible footnotes again and if you haven’t read the article yet, I recommend you do so you fully grasp what comes next. To put things simply, we need 2 things: a way to register a footnote reference within the text, and a way to display the footnotes for a given page at the bottom of a post. Let’s start with the first one.
To author a footnote within text content, we use a footnoteref
Liquid tag which takes the footnote identifier and the footnote content as arguments (in that order). It looks like this:
Something about {% footnoteref "css-counters" "CSS counters are, in
essence, variables maintained by CSS whose values may be
incremented by CSS rules to track how many times they’re used." %}
CSS counters{% endfootnoteref %} that deserves a footnote explaining
what they are.
The 11ty configuration would be authored like this:
const FOOTNOTE_MAP = {}
config.addPairedShortcode(
'footnoteref',
function footnoteref (content, id, description) {
const key = this.page.inputPath
const footnote = { id, description }
FOOTNOTE_MAP[key] = FOOTNOTE_MAP[key] || {}
FOOTNOTE_MAP[key][id] = footnote
return `<a href="#${id}-note" id="${id}-ref" aria-describedby="footnotes-label" role="doc-noteref" class="Footnotes__ref">${content}</a>`
}
)
Here is how it works: when rendering the footnoteref Liquid tag, we retrieve the registered footnotes for the current page (if any) from the FOOTNOTE_MAP map. We add the newly registered footnote to it, and we render an anchor link to the footnote.
It is important to use a regular function rather than an arrow function, since we need to access the page stored on the this context. The ability to access page data values within shortcode definitions comes from 11ty.
For the second part—displaying the footnotes at the bottom of a post—I created a footnotes.html partial which I render at the bottom of the post layout (passing it the current page object), like so:
<article>
  {{ content }}
  {% include "components/footnotes.html", page: page %}
</article>
Now, we need a way to retrieve the footnotes from the page. That’s actually not too easy in Liquid unfortunately since there is no way to inject a global variable or simply assign a function call to a variable. Liquid’s utilities mostly aim at rendering HTML (as shown above) so it’s not too straightforward to return an array.
I played around with a few solutions, and eventually landed on a wacky filter. Basically, I expose a footnotes filter which expects the page as an argument and returns the footnotes for that page.
{% assign footnotes = '' | footnotes: page %}
This is pretty ugly. We need a value to be able to apply a filter, even though that value can be anything since the filter will just replace it with an array of footnotes.
Note that this hack is rendered moot by the plugin since it exposes a footnotes
shortcode which does the full HTML rendering. Therefore, there is no need to access the array of footnotes in the template as it’s all done from within the plugin.
Here is how it’s defined:
config.addFilter(
  'footnotes',
  // The first argument is the value the filter is applied to,
  // which is irrelevant here.
  (_, page) => Object.values(FOOTNOTE_MAP[page.inputPath] || {})
)
From there, we can render the necessary markup to output the footnotes using a for loop to iterate over each of them.
{% assign footnotes = '' | footnotes: page %}
{% assign count = footnotes | size %}

{% if count > 0 %}
<footer role="doc-endnotes">
  <h2 id="footnotes-label">Footnotes</h2>
  <ol>
    {% for footnote in footnotes %}
    <li id="{{ footnote.id }}-note">
      {{ footnote.description | markdown }}
      <a
        href="#{{ footnote.id }}-ref"
        aria-label="Back to reference {{ forloop.index }}"
        role="doc-backlink"
        >↩</a
      >
    </li>
    {% endfor %}
  </ol>
</footer>
{% endif %}
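Note that the template pipes each footnote description through a markdown filter, which 11ty does not provide by default. A real setup would likely register markdown-it’s renderInline as a filter; below is a deliberately tiny stand-in (an assumption: it handles only inline code and emphasis, and is NOT a real Markdown parser) just to show the filter’s shape.

```javascript
// Minimal stand-in for a `markdown` Liquid filter. Assumption: only
// inline code and emphasis are supported — a real implementation would
// delegate to markdown-it’s renderInline instead.
const renderInline = (content) =>
  content
    .replace(/`([^`]+)`/g, '<code>$1</code>')
    .replace(/\*([^*]+)\*/g, '<em>$1</em>')

// config.addFilter('markdown', renderInline)
renderInline('CSS `counters` are *variables* maintained by CSS.')
// → 'CSS <code>counters</code> are <em>variables</em> maintained by CSS.'
```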
So to sum up:
- A footnoteref Liquid tag to wrap footnote references in the text. It takes an id and the footnote description as arguments, and renders an anchor to the correct footnote.
- A footnotes Liquid filter, which is basically a hacky way to get the footnotes for a given page so they can be assigned to a variable. This hack is solved by using the plugin.
- A footnotes.html Liquid partial which gets the footnotes for the current page and renders them within the appropriate DOM structure. The plugin exposes a footnotes shortcode that does that.

That’s about it. Pretty cool, huh? ✨
If you are interested in using these footnotes in 11ty, check out eleventy-plugin-footnotes on GitHub. There are install instructions, guidelines and examples.
The idea behind accessibility is to provide equal access to content to everyone, regardless of who they are or how they browse the web. Indeed, universal access to information and communication technologies is considered a basic human right by the United Nations.
In other words, providing accessible interfaces and content is about considering everyone, regardless of their abilities or disabilities, or the context in which they access content. Practically speaking, we can draw 5 large categories of impairments:
Visual: this ranges from poor eyesight, to colour-blindness, from cloudiness to complete blindness, from fatigue to cataract. The web being a platform primarily consumed with the eyes, a lot of technology improvements have been made in that regard, and that is why accessibility is sometimes thought to be solely about accommodating towards blind users.
Motor: motor impairments, when it comes to the web, usually concern solely upper-limb disabilities, so nothing below the belt. There is a wide range of reasons for someone to have limited mobility, such as tendonitis, carpal tunnel syndrome, arthritis, a broken hand or arm, a skin condition, hand tremors, Parkinson’s disease or, more commonly, having only one hand free.
Cognitive: cognitive impairments are a broad and practically endless category because brains are complicated pieces of machinery and everyone is different. Some examples include dyslexia, post-traumatic stress disorder (PTSD), attention deficit and hyperactivity disorder (ADHD), amnesia, insomnia, vestibular disorders (motion sickness), anxiety, dementia…
Auditive: while not given much consideration when the web was designed as an essentially all-text medium, auditory impairments are more relevant than ever in this day and age where a lot of content is provided through videos and podcasts. They include but are not limited to being hard-of-hearing (HoH), being in a loud environment, or being completely deaf.
Vocal: vocal impediments range from benign (and sometimes temporary) situations such as having a sore throat or a foreign accent, to more serious conditions like a stutter or mutism. Because the web is seldom interacted with solely through oral interfaces, this category tends to be left out.
As you can see, there are so many things to consider. It may be daunting, but it’s also the beauty of our job as designers and frontend developers. We get to work for everyone. I don’t know about you, but I find it inspiring. ✨
I will also announce the daily tip on Twitter with the #A11yAdvent hashtag. Feel free to share your opinion and tips under that hashtag as well!
The calendar was originally entirely on this page, but I have decided to break it down into more digestible pieces, each under its own page. Given the link you just followed, it looks like you’re looking for: .
I guess I wanted to try 11ty since it’s all the cool kids talk about nowadays. Additionally, it feels nice leaving Ruby behind because that’s a pain to deal with as far as I’m concerned. 11ty is built on Node.js, which is more up my alley.
Paul Lloyd wrote a very good article on migrating from Jekyll. So did Steve Stedman. And Alex Pearce. And probably other smart people. I’d like to add my own contribution to the growing collection of articles about coming from Jekyll.
I’m mostly going to expand on things that took me a while to figure out, hoping to help other poor souls lost in their journey. Find a short table of contents below:
Overall, the migration was relatively smooth. It took me about 10 hours spread across a weekend, which I consider an affordable amount of time for what is essentially changing build systems.
Here are some things I do like a lot from 11ty:
And some of the things I was either a little frustrated or not super happy with:
include, page or site objects. Here everything sort of blends together in an opaque way.

That being said, I am overall pleased with the migration and the tool as a whole. An interesting thing to point out is that compilation didn’t get much faster for me: both systems take about 2 seconds to compile hundreds of pages.
Anyway, without further ado let’s dive in.
I have about 300 articles on this blog, so there was no way I would do anything manually. Even an automated script would have been a pain, so I was really looking forward to preserving everything about the blog as is through the configuration only. I started by configuring a custom collection for posts:
config.addCollection('posts', collection =>
  collection
    .getFilteredByGlob('_posts/*.md')
    .sort((a, b) => b.date - a.date)
)
I use this collection in multiple places: in the blog, but also on the home page to list the most recent articles, as well as in the RSS feed. I figured it was easier to sort the collection once in the configuration rather than everywhere I look up collections.posts, since 11ty sorts it chronologically by default.
Now, Jekyll being a blogging system at the core, it treats posts as first-class citizens and expects an article’s date to be in its slug—for instance 2020-11-30-from-jekyll-to-11ty.md
would then be compiled into /2020/11/30/from-jekyll-to-11ty/index.html
.
In its documentation, 11ty explains pretty extensively how to handle permalinks, but not really how to define a permalink pattern for an entire collection. It took me a while to figure out that I needed to create a _posts.json
file in the _posts
directory with the following JSON:
{
  "layout": "post",
  "permalink": "/{{ page.date | date: '%Y/%m/%d' }}/{{ page.fileSlug }}/"
}
This way, every article has its permalink defined based on its file name, and it is not necessary to manually author the permalink
property in every single post. Same thing for the layout
property.
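Concretely, here is a sketch of what that permalink pattern effectively computes for a post file name (an assumption: this is an illustration of the resulting mapping, not 11ty’s actual implementation):

```javascript
// Illustration only: derive the Jekyll-style URL from a dated post
// file name, mirroring what the permalink pattern above produces.
const permalinkFor = (filename) => {
  const [, year, month, day, slug] = filename.match(
    /(\d{4})-(\d{2})-(\d{2})-(.+)\.md$/
  );
  return `/${year}/${month}/${day}/${slug}/`;
};

permalinkFor("_posts/2020-11-30-from-jekyll-to-11ty.md");
// → '/2020/11/30/from-jekyll-to-11ty/'
```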
I do not provide an anchor for every single heading, but I do rely on headings having an id
attribute to create table of contents in long articles like this one. I used to rely on Kramdown and its GFM option for that, but 11ty uses markdown-it which does not come with automatic heading id
generation.
To preserve that behaviour, we need to use our own markdown-it instance, as well as the markdown-it-anchor plugin. The latter comes with unicode support by default, which is not what GFM defaults to, so we also need to use uslug as a slugifier to come closer to the original behaviour.
config.setLibrary(
'md',
markdownIt({ html: true }).use(markdownItAnchor, { slugify: uslugify })
)
The last thing I couldn’t solve was that the GFM slugifier would maintain consecutive hyphens while uslug doesn’t. For instance, “Posts & permalinks” gets slugified as posts--permalinks
with GFM, but posts-permalinks
with uslug.
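To make the difference concrete, here is a rough sketch of both behaviours (an assumption: these are simplified re-implementations for illustration, neither Kramdown/GFM’s nor uslug’s exact algorithm):

```javascript
// Simplified, illustrative slugifiers — NOT the real GFM or uslug code.
const gfmLike = (str) =>
  str
    .toLowerCase()
    .trim()
    .replace(/[^\w\- ]+/g, '') // drop punctuation like '&'…
    .replace(/ /g, '-')        // …but turn every space into a hyphen

const uslugLike = (str) => gfmLike(str).replace(/-+/g, '-') // collapse runs

gfmLike('Posts & permalinks')   // → 'posts--permalinks'
uslugLike('Posts & permalinks') // → 'posts-permalinks'
```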
Jekyll, for good or for bad, seems to be playing fast and loose with file extensions. You can have Markdown in Liquid files, Liquid in Markdown files, or use the .html
extension, and Jekyll would process everything mostly how you expected it to.
11ty is a little more conservative with that which is probably a good thing. Liquid files do not compile their content as Markdown, which means everything needs to be authored as HTML in them. That can be a little cumbersome, especially when there are a lot of links within paragraphs, since they are way more convenient to author in Markdown.
To work around the problem, I decided to use the .liquid
file extension everywhere, and expose a markdown
Liquid tag which would compile its content to Markdown.
config.addPairedShortcode(
  'markdown',
  content => markdownIt().render(content)
)
Then, I can safely author Markdown content within Liquid files:
{% markdown %}
My name is Kitty. I’m a non-binary web developer in Berlin. I have led
the web team at [N26](https://n26.com) for over 4 years and am about
to get started at [Gorillas](https://gorillas.io). I specialise in
accessibility and inclusivity. For a longer version, [read more about
me](/about/).
{% endmarkdown %}
Surprisingly enough, 11ty compiles Markdown files with Liquid by default, which can be pretty annoying in an article like this one that shows Liquid syntax in code blocks, since that syntax gets evaluated instead of rendered literally. I had to disable the Liquid renderer for this specific article (and similar ones mentioning Liquid syntax in code snippets) by adding this to the YAML front matter:
templateEngineOverride: md
The nice thing about Jekyll is that it comes with a collection of Liquid filters to help with rendering. These filters do not exist in 11ty, so I had to recreate them. Fortunately, it’s relatively easy as they can be authored with JavaScript and injected into the configuration:
config.addFilter('date_to_string', dateToString)
config.addFilter('date_to_xmlschema', dateToXmlSchema)
config.addFilter('group_by', groupBy)
config.addFilter('number_of_words', numberOfWords)
config.addFilter('sort_by', sortBy)
config.addFilter('where', where)
If you would like to read the code for these filters, open the .eleventy.js
file on GitHub.
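For a taste, here is what one of them might look like (an assumption: a simple whitespace-based word count, not necessarily the implementation linked above):

```javascript
// Hypothetical sketch of the `number_of_words` filter: a plain
// whitespace-based count, registered the same way as the others.
const numberOfWords = (content) =>
  String(content).trim().split(/\s+/).filter(Boolean).length

// config.addFilter('number_of_words', numberOfWords)
numberOfWords('From Jekyll to 11ty') // → 4
```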
I used to have 2 Jekyll configuration files: one for the production site (_config.yml), and a development one which overrides some settings during development (_config.dev.yml). The first would expose an environment global set to production, and the second would overwrite it to development. Then I would read site.environment to know whether to register the service worker, for instance.
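The pair of configuration files might have looked something like this (a hypothetical sketch; only the environment key is shown):

```yaml
# _config.yml — used for production builds
environment: production
```

```yaml
# _config.dev.yml — layered on top during development, e.g.
# jekyll serve --config _config.yml,_config.dev.yml
environment: development
```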
As far as I understand, 11ty does not have a concept of environment. There is no such thing as a production build vs a development one. If anything, the development environment is just a build with watchers enabled. So it took me a while to come up with a way to know in which environment the code is compiled.
Only the Nunjucks templater allows injecting globals, and I didn’t originally get that data files could be authored in something other than JSON, so I decided to create a Liquid tag which would only output its content in production.
config.addPairedShortcode(
  'production',
  content => process.env.NODE_ENV === 'production' ? content : undefined
)
Then I used it in my Liquid templates to wrap content that should only be rendered when the NODE_ENV
environment variable is set to production
. I don’t set it anywhere locally, and it’s set to production
when building on Netlify.
{% production %}
<script>
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/service-worker.js', { scope: '/' })
  }
</script>
{% endproduction %}
Browsing the documentation, I eventually found out that environment variables can be exposed through a .js data file. That’s what I finally opted for: environment: process.env.NODE_ENV.
{% if site.environment == 'production' %}
<script>
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/service-worker.js', { scope: '/' })
  }
</script>
{% endif %}
Would I recommend migrating from Jekyll to 11ty? Not necessarily. Once again, Jekyll is still a robust blogging system. For good or for bad, it is pretty opinionated, which makes getting started a little easier in my opinion. 11ty feels more flexible, which is nice, but can be daunting at the same time.
That being said, 11ty fills a glaring gap in the static site generator landscape: a simple and extensible platform written in Node.js which does not enforce single-page applications the way Gatsby does. In the grand scheme of things, it’s still a recent technology, and I can see it flourish in the next year or two. ✨
On my side, I’m going to experiment with a few things now that my site is built in an environment I can control better, namely:
That’s it for today folks! Stay safe. 💚
A few years back, I wrote Accessible footnotes with CSS, now the first result when asking Google for “accessible footnotes”. To this day, I still think it’s one of the most useful articles I’ve ever written because: a) most footnotes out there are not accessible and b) the CSS in that demo is actually pretty clever and was fun to write.
Today, I would like to revisit that implementation to use it in React. If you are interested in a ready-to-go solution, I am currently working on react-a11y-footnotes, an experimental library that you can install directly from npm to use in your projects.
First of all, let’s sort out nomenclature so we are all on the same page:
Here is how a footnote reference should be marked:
<p>
  Something about
  <a
    href="#css-counters-note"
    id="css-counters-ref"
    aria-describedby="footnotes-label"
    role="doc-noteref"
    >CSS counters</a
  >
  that deserves a footnote explaining what they are.
</p>
Let’s break it down:
- The href attribute is what makes the footnote reference link to the actual footnote further down the page. It is a regular anchor link pointing to an id.
- The id attribute is necessary to be able to come back to the footnote reference after having read the footnote.
- The aria-describedby attribute indicates that the anchor link is a footnote reference. It refers to the title of the footnotes section.
- The role attribute is used in digital publishing (DPUB) contexts to indicate that the link is a footnote reference.

And here is how the footnotes section would be authored:
<footer role="doc-endnotes">
  <h2 id="footnotes-label">Footnotes</h2>
  <ol>
    <li id="css-counters-note">
      CSS counters are, in essence, variables maintained by CSS whose values
      may be incremented by CSS rules to track how many times they’re used.
      <a
        href="#css-counters-ref"
        aria-label="Back to reference 1"
        role="doc-backlink"
        >↩</a
      >
    </li>
  </ol>
</footer>
Again, let’s break it down:
- A <footer> is not required per se, but it seems like an appropriate sectioning element for footnotes. I guess <aside> could also do the trick, and as a last resort, a <div>.
- The role attribute is used in digital publishing (DPUB) contexts to indicate that this section contains footnotes.
- The section title needs an id since it is referred to by every reference in their aria-describedby attribute.
- The aria-label attribute is necessary to provide explanatory content if the link text does not (like here, with an icon).

As you can see, there is quite a lot to unpack, and you can soon realise why maintaining footnotes by hand can be tedious and error-prone.
My React implementation of footnotes aims at making it easier to author the references, and making the footnotes themselves—including their numbering—automatic. To do that, it needs 3 different parts:
- A FootnoteRef component that will render a reference (an anchor tag with all the necessary attributes).
- A Footnotes component that will render the footer and all the footnotes in the correct order.
- A FootnotesProvider context that will tie all of this together by storing registered references in the correct order to provide the relevant footnotes to the footer.

Coming back to our initial example, the usage might look like this:
const BlogPage = props => (
  <FootnotesProvider>
    <article>
      <p>
        Something about{' '}
        <FootnoteRef description='CSS Counters are, in essence, variables maintained by CSS whose values may be incremented by CSS rules to track how many times they’re used.'>
          CSS counters
        </FootnoteRef>{' '}
        that deserves a footnote explaining what they are.
      </p>
      {/* Some more content */}
      <Footnotes />
    </article>
  </FootnotesProvider>
)
What’s nice about this approach is that footnotes are essentially out of sight, out of mind. The footnote itself is authored as the description
prop on the FootnoteRef
component, which makes it easy to maintain. The Footnotes
component does the work of laying out the footnotes in the order of appearance in the text.
I hope react-a11y-footnotes will help people implement clean and accessible footnotes for everyone. I’m currently finalising the API and will most likely publish a first version some time this week.
I am also playing with providing optional basic styling—especially for the references themselves since they currently rely on CSS counters—to make it easy to import the library, its styles, and start footnoting.
If you have any suggestion, comment or issue, feel free to share on Twitter or in an issue on the GitHub repository!
The year is 2016. Mike Smart and I just joined N26 in Berlin to revamp their web strategy, hire a team and build a platform that will last longer than the previous one. With but a rough idea in mind of what we wanted to achieve, we had a mountain of decisions to take. Amongst them, how to author our styles.
We originally started with CSS modules. This was principally motivated by the fact that I was under a writing contract for a book on the matter at the time. Don’t waste your time looking for it, it was neither written nor published and given CSS modules has lost a lot of its traction in favour of more modern solutions, maybe it’s for the best after all. CSS modules also came with create-react-app if I’m not mistaken, which is what we started with (before ejecting literally during our first week).
There were good things and bad things with CSS modules. On one hand, writing plain CSS was nice and we knew it would come with virtually no learning curve for people joining us down the line. On the other, style composition was a little clumsy (probably because we didn’t know how to do it well) and variables were a mix between Sass and JavaScript imports, but neither really.
@value blue, teal from '../../styles/variables.css';

.base {
  composes: base from "./index.css";
  background-color: blue;
  border: 2px solid blue;
  color: white;
  padding: 0.5em 1em;
}
We were already 2,000 commits in the making at that stage, and our roadmap was getting clearer and clearer: we’d end up with multiple large-scale projects within the same codebase, and I was growing worried of our CSS scaling poorly in the long run. That’s when on February 17th 2017, two weeks before our very first live release, I came to work one morning and told Mike “hear me out… how about CSS-in-JS?”
I had done some research on JS libraries for styling, and while the ecosystem was nowhere near what it is today, there were a few contenders: styled-components, Fela, Aphrodite and Emotion were all in 0.x or v1 at most and JSS was going strong for over 2 years already. So there were definitely options—or so we thought. Now, we had 2 main constraints (besides obvious aesthetic considerations):
At that time, styled-components was getting hyped for its elegant syntactic approach so we were really hoping we could get to use it. Unfortunately, it did not support server-side rendering until v2, and it still does not provide support for atomic CSS output. That meant styled-components was not an option.
Other libraries did offer SSR support, but they didn’t give the ability to get atomic classes. Some were built with atomic CSS in mind, but they did not integrate nicely in a React ecosystem. Long story short, it turns out that we didn’t have that many options in the end.
Fela offered a glimmer of hope though. It did support SSR from its very first version, and was designed in such a way that it was possible to author monolithic CSS and get atomic output (more on that later). Bingo, we had a winner and I rewrote our entire styling layer in the few days before launch.
Fast-forward to late 2020: what is Fela? Fela describes itself as a small, high-performant and framework-agnostic toolbelt to handle state-driven styling in JavaScript. It goes on to state that it is dynamic by design and renders styles depending on the application state.
If I had to describe it, I would say Fela is an ecosystem of styling utilities to write styles for JavaScript applications (from vanilla to React, from Angular to Inferno, from React Native to ReasonML). At its core sits a small styling engine, onto which extensions and enhancers can be plugged to tailor it to one’s project.
Robin Weser, the developer behind Fela, considers it to be feature-complete. It hasn’t changed too much in a while because it doesn’t need much more by now. It should either provide the tools one needs, or make it possible to author these tools in a straightforward fashion.
Today, Fela is one CSS-in-JS library amongst others, and to some extent they all more or less do the same things: dynamic styling, performant rendering, optimisations… Still, there are a few things where Fela shines.
I think the main benefit of Fela is the ability to output styles in an atomic way without enforcing that styles be authored as such. Authors get to write CSS as they usually would (in a “monolithic” way), and the tool does the hard job of outputting atomic classes for maximum performance.
Consider the following React components styling two p elements as coloured squares (Fela integration code omitted for the sake of simplicity):
const a = () => ({
  width: "5em",
  height: "5em",
  backgroundColor: "deepskyblue",
});

const b = () => ({
  width: "5em",
  height: "5em",
  backgroundColor: "deeppink",
});

// `css` resolves a rule to class names; it comes from the Fela
// integration code omitted here (e.g. the `useFela` hook).
const SquareA = (props) => <p className={css(a)}>I’m deepskyblue!</p>;
const SquareB = (props) => <p className={css(b)}>I’m pink!</p>;
Now the output would look like this (prettified for illustration):
.a {
  width: 5em;
}

.b {
  height: 5em;
}

.c {
  background-color: deepskyblue;
}

.d {
  background-color: deeppink;
}
<p class="a b c">I’m deepskyblue!</p>
<p class="a b d">I’m pink!</p>
While this might look like unnecessary optimisation on such a reduced example, it does matter on projects growing fast and large. It effectively caps the amount of CSS shipped to the browser at the number of distinct CSS declarations. Of course, there will be quite a lot (every different padding, margin, colour and so on) but there will be an upper limit. Particularly when following a design system or component library where styling is dictated by a strict set of reusable rules.
This is what makes Fela really stand out from other similar CSS-in-JS libraries. Atomic CSS happens silently and out of the box without having to think in an atomic way. No need to remember atomic class names, or force a specific naming convention; keep writing CSS as always (except, well, as JavaScript objects), and benefit from highly performant output.
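As a toy illustration of that deduplication (an assumption: this is not Fela’s actual renderer, just the core idea of mapping each distinct declaration to one single-purpose class):

```javascript
// Toy atomic renderer: each distinct `property:value` pair is minted
// one class name, and repeated declarations reuse the cached class.
const cache = new Map()

const classNameFor = (property, value) => {
  const key = `${property}:${value}`
  if (!cache.has(key)) {
    // Mint the next single-purpose class: a, b, c, … (fine for a demo)
    cache.set(key, String.fromCharCode(97 + cache.size))
  }
  return cache.get(key)
}

const css = (style) =>
  Object.entries(style)
    .map(([property, value]) => classNameFor(property, value))
    .join(' ')

css({ width: '5em', height: '5em', backgroundColor: 'deepskyblue' }) // → 'a b c'
css({ width: '5em', height: '5em', backgroundColor: 'deeppink' })    // → 'a b d'
```

Note how the shared width and height declarations resolve to the same two classes for both squares, which is exactly what caps the stylesheet size.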
To this day, if there is one thing I have always found impressive about Fela, it is its rich ecosystem of utilities and plugins, especially considering they are almost all authored and maintained by Robin Weser, the original creator, and are part of the main Lerna monorepo.
Even pretty advanced behaviour such as responsive properties—properties whose value varies across pre-defined breakpoints—or extensive testing of state-specific styles (e.g. hover) are already built and ready to use.
And if something happens to be missing, Fela is very easy to customise with plugins and enhancers. Both are essentially functions to customise style processing.
Having never worked on a project requiring right-to-left support, I unfortunately have very little experience in that area. That being said, Fela’s support for RTL styling is excellent, especially when compared to other CSS-in-JS libraries (it even has bidi support).
What’s particularly interesting about the way Fela handles RTL is that it can be localised to specific sub-trees. This makes it especially relevant for internationalised applications where certain parts of the UI need right-to-left content. The configuration is not set globally at the root level (although it can be), and can be configured at will within the tree.
Nothing is ever perfect, and while Fela has been fantastic looking back at the last 4 years, it also came with some ups and downs along the way. Allow me to paint you a word picture.
Shorthand and longhand properties can conflict, which can get messy when not properly enforced with either a strict methodology or a plugin. For instance, if you apply padding with the shorthand in one component, but the longhand properties in another, these properties could end up conflicting (just like in CSS).

This is actually outlined in Fela’s documentation, which recommends using longhands everywhere to avoid these situations. There is also the official fela-plugin-expand-shorthand package to break down shorthand declarations into their longhand properties.
Fela, comparatively to styled-components especially, has a relatively small community. Omitting the occasional minor contributor, Robin Weser is basically the sole maintainer although he is currently sponsored to maintain Fela as part of his full-time work.
On the bright side, it got us to actually invite Robin to come visit the N26 office in Berlin to have a look at our code base and help us diagnose a mismatch issue we were having. And also have some delicious vegan food. ✨
Being almost 5 years old, Fela evolved alongside React. When we started using it in 2017, higher-order components were all the hype. So every component needing styles would end up being wrapped with the connect higher-order component that would provide resolved class names.
import { connect } from "react-fela";

const square = () => ({ width: "5em", height: "5em" });

const Square = connect({ square })((props) => (
  <p className={props.styles.square}>I’m a square!</p>
));
And soon enough, higher-order components were not the way to go anymore, and render functions were supposedly a better approach, so we’d use FelaComponent
everywhere:
import { FelaComponent } from "react-fela";

const square = () => ({ width: "5em", height: "5em" });

const Square = (props) => (
  <FelaComponent style={square}>
    {({ className }) => <p className={className}>I’m a square!</p>}
  </FelaComponent>
);
And while render functions are great, they also clutter the JSX quite a lot so we turned to creating our styled containers with createComponent
.
import { createComponent } from "react-fela";

const square = () => ({ width: "5em", height: "5em" });
const Styled = createComponent(square, "p");

const Square = (props) => <Styled>I’m a square!</Styled>;
And it’s pretty great until you start passing a lot of props to your components for styling purposes, and only want some of them to make their way to the DOM as actual HTML attributes. So there is a hook instead:
import { useFela } from "react-fela";

const square = () => ({ width: "5em", height: "5em" });

const Square = (props) => {
  const { css } = useFela();

  return <p className={css(square)}>I’m a square!</p>;
};
As of writing, it seems that this is the way forward. Robin confirmed using the useFela
hook was the recommended way, and the fact that there are so many approaches to using Fela is a side-effect of it growing alongside React and its evolving design patterns.
I think most libs had that issue since its kinda linked to how React evolved. In the beginning it was all about HoCs until the render-props pattern emerged just to be dethroned by hooks later on.
So the official recommend way will be hooks for everyone on react > 16.3 these days. I’m going to reflect that in the new docs. It’s the fastest and most simple API of all yet the others are totally fine.
I just don’t like them anymore since you need to be more careful with e.g. the props passthrough where hooks are not tied to the rendering at all—they just provide a nice CSS API just like Emotion has.
— Robin Weser, creator of Fela about the evolution of its API
This API evolution is not Fela’s fault per se. If anything, it is a testament of it keeping up with what the React community wants to use. Nevertheless, it did give us some challenge to keep our code base clean and up to date. Full disclosure, we never migrated to useFela
and still use createComponent
everywhere. At least it’s consistent.
Fela provides a lot of useful plugins to ease development, such as beautified styles, Enzyme bindings, a layout debugger, a verbose logger, performance audits, styling statistics to name just a few.
What I wanted was having these dependencies as devDependencies
since this is what they are: development dependencies. The problem came when importing these dependencies in the file instantiating the Fela renderer: all good in development, but broken in production since these dependencies were not installed.
It took me a bit of fiddling to figure out a solution involving Webpack. I would assume it would work similarly in any bundler able to inject global variables during compilation.
The main idea is to have 2 different files exporting plugins and enhancers: one for development (fela.development.js), and one for production (fela.production.js). The development one could look like this:
import beautifier from "fela-beautifier";
import validator from "fela-plugin-validator";
import embedded from "fela-plugin-embedded";
export const enhancers = [beautifier()];
export const plugins = [validator(), embedded()];
And the production one:
import embedded from "fela-plugin-embedded";
export const enhancers = [];
export const plugins = [embedded()];
Then in Webpack, provide the content of the correct file as a global variable (e.g. FELA_CONFIG
) based on the environment:
// Using some Fela plugins/enhancers in development exclusively,
// which are (and should be) `devDependencies`. Relying on Webpack
// to provide them to the application to avoid a crash on production
// environments where `devDependencies` are absent.
new webpack.ProvidePlugin({
  FELA_CONFIG: path.resolve(`src/fela.${process.env.NODE_ENV}.js`),
});
Finally, when instantiating the Fela renderer, read the plugins and enhancers from the global FELA_CONFIG
variable.
/* global FELA_CONFIG */
export default createRenderer({
plugins: FELA_CONFIG.plugins,
enhancers: FELA_CONFIG.enhancers,
});
react-dates is a fantastic date-picker library from AirBnB. It’s built on top of Aphrodite and comes with monolithic class names by default in order to be unopinionated regarding the styling layer.
It took us some time to figure out how to integrate it properly with Fela so styles are applied atomically with Fela (and therefore optimised) instead of through the original CSS classes. Fortunately, react-dates offers a way to customise the rendering process with react-with-styles
interfaces.
import ThemedStyleSheet from "react-with-styles/lib/ThemedStyleSheet";
ThemedStyleSheet.registerInterface(OurFelaInterface);
Now we just had to write an interface for Fela. I’m going to save you the trouble and show you how it looks. It needs the Fela renderer as an argument in order to compute resolved class names.
import { StyleSheet } from "fela-tools";
import { combineRules } from "fela";
// Custom `react-with-styles` interface for Fela:
// https://github.com/airbnb/react-with-styles
export default (renderer) => ({
create(styleHash) {
return StyleSheet.create(styleHash);
},
resolve(stylesArray) {
const styles = stylesArray.flat();
const rules = [];
const classNames = [];
// This is run on potentially every node in the tree when rendering,
// where performance is critical. Normally we would prefer using
// `forEach`, but old-fashioned `for` loops are slightly faster.
for (let i = 0; i < styles.length; i += 1) {
const style = styles[i];
if (!style) continue;
if (style.ruleName) classNames.push(style.ruleName);
if (typeof style === "function") rules.push(style);
else rules.push(() => style);
}
const rule = combineRules(...rules);
const classes = renderer.renderRule(rule);
classNames.push(classes);
return { className: classNames.join(" ") };
},
});
One minor problem with atomic classes is that they tend to be incorrectly flagged by adblockers as elements to be hidden. This is something we learnt the hard way mid-2017 and that we fixed in Fela directly with the filterClassName
option.
By default, Fela now skips the .ad
class, but there are more to add to the list to make sure no adblocker messes with the styles.
const SKIPPED_CLASSNAMES = [
// Short for “advertisement”
"ad",
"ads",
"adv",
// See: https://github.com/adblockultimate/AdBlocker-Ultimate-for-Chrome/blob/3f07afbffa5c389270abe9ee4dc13333ca35613e/filters/filter_9.txt#L867
"bi",
"fb",
"ig",
"pin",
"tw",
"vk",
];
export default createRenderer({
filterClassName: (className) => !SKIPPED_CLASSNAMES.includes(className),
});
Thanks to the fela-plugin-custom-property package, it is possible to add support for custom properties. Not the CSS kind though: in this case, “custom properties” refers to custom-named object properties and the way they are processed into CSS. This plugin can be leveraged to implement warnings or post-processing when writing specific declarations.
Consider for a moment that you expect all your durations to be authored in milliseconds instead of seconds. By overloading the duration properties, you can warn about or even manipulate their value through Fela. For instance, converting the values into milliseconds:
import custom from "fela-plugin-custom-property";
const handleDuration = (property) => (value) => ({
// Convert durations expressed in seconds into milliseconds
// E.g. 0.2s, 1s -> 200ms, 1000ms
[property]: value.replace(
/([\d.]+)s\b/g,
(_, a) => Number(a) * 1000 + "ms"
),
});
const renderer = createRenderer({
plugins: [
custom({
transitionDuration: handleDuration("transitionDuration"),
transitionDelay: handleDuration("transitionDelay"),
animationDuration: handleDuration("animationDuration"),
animationDelay: handleDuration("animationDelay"),
}),
],
});
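Taken in isolation, the string conversion at the heart of handleDuration is just a regular-expression replace. Here is a standalone sketch (the toMs name is mine, not part of the plugin) that matches a number directly followed by a bare s, leaving values already expressed in ms untouched:

```javascript
// Convert second-based durations into milliseconds.
// `[\d.]+` must sit directly before the `s`, so millisecond
// values (where an `m` intervenes) are left alone.
const toMs = (value) =>
  value.replace(/([\d.]+)s\b/g, (_, n) => Number(n) * 1000 + "ms");

console.log(toMs("0.2s")); // "200ms"
console.log(toMs("1s, 250ms")); // "1000ms, 250ms"
```

The same logic applies to any of the transition/animation duration and delay properties registered with the plugin.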
All in all, Fela is an amazing piece of software. It’s pretty powerful, relatively easy to use and very performant. For small to medium scale projects—especially those based on create-react-app—I would probably stick to plain CSS, or maybe Sass. But for anything large scale, I would highly recommend Fela as a bulletproof styling solution.
Despite its relatively small community, Fela has been around for 4 years, and is still actively maintained and updated. The future roadmap includes:
Robin Weser has also been working on Elodin for a few years now, an experimental universal styling language, usable across platforms. If design languages are your thing, be sure to check it out!
You might be interested in reading Implementing a reduced motion mode, where I go into detail on how to use the OS preference alongside CSS custom properties to manage motion preferences.
The idea is to provide an easy way to access this information, and react (no pun intended) to it should it change, thanks to media queries. It could be either a React hook to abstract that away, or a React component for a very declarative approach like below.
const ThankYouPage = props => (
<>
<p>Thank you for subscribing to our newsletter!</p>
<Settings.WithMotion>
<img src='./assets/party.gif' alt='Cat chasing confetti' />
</Settings.WithMotion>
</>
)
Our Settings
object (sometimes called Only
) could hold several React components such as WithMotion
, WithoutMotion
, WithTransparency
, LightMode
, WithReducedData
…
They would all essentially be based on a useMatchMedia
hook. It would query the browser for a certain preference, listen for any change, and set it in a local state for convenience.
const useMatchMedia = (query, defaultValue = false) => {
const [matches, setMatches] = React.useState(defaultValue);
React.useEffect(() => {
const q = window.matchMedia(query);
const onChange = ({ matches }) => setMatches(matches);
onChange(q);
q.addListener(onChange);
return () => q.removeListener(onChange);
}, [query]);
return matches;
};
From there, creating our React component is pretty straightforward:
export const Settings = {};
const WithMotion = ({ children }) =>
useMatchMedia("(prefers-reduced-motion: no-preference)") ? children : null;
const WithoutMotion = ({ children }) =>
useMatchMedia("(prefers-reduced-motion: reduce)") ? children : null;
const WithReducedData = ({ children }) =>
useMatchMedia("(prefers-reduced-data: reduce)") ? children : null;
const WithReducedTransparency = ({ children }) =>
useMatchMedia("(prefers-reduced-transparency: reduce)") ? children : null;
const DarkMode = ({ children }) =>
useMatchMedia("(prefers-color-scheme: dark)") ? children : null;
const LightMode = ({ children }) =>
useMatchMedia("(prefers-color-scheme: light)") ? children : null;
Settings.WithMotion = WithMotion;
Settings.WithoutMotion = WithoutMotion;
Settings.WithReducedData = WithReducedData;
Settings.WithReducedTransparency = WithReducedTransparency;
Settings.DarkMode = DarkMode;
Settings.LightMode = LightMode;
At this stage, we could add as many options as we want: viewport size, device type, contrast preference… There are a lot of possibilities.
If you prefer hooks to React components, you could write small wrapper hooks for every individual preference:
const useMotionPreference = () => {
const prefersReducedMotion = useMatchMedia(
"(prefers-reduced-motion: reduce)"
);
// Or really any API you would like… A few ideas:
// - `reduce` vs `no-preference` to match the CSS spec
// - `on` vs `off`
// - just a boolean instead
return prefersReducedMotion ? "reduced" : "default";
};
const useColorScheme = () =>
useMatchMedia("(prefers-color-scheme: dark)") ? "dark" : "light";
I hope you like the idea! Adapting to users’ preferences is not only a good design principle, it can also help with accessibility (for instance, disabling animations for people suffering from vestibular disorders). If you are going to rely on the operating system’s preferences, be sure to provide a way to still customise them on a per-website basis.
If you are interested in the intersection of React code and web accessibility, be sure to have a look at the following articles:
But managing dependencies can be tricky. In this article, I’ll share some thoughts on how we stayed sane with dependencies at N26.
As a project grows bigger and older, it sometimes becomes difficult to know which dependency serves what purpose. For some of them, it’s pretty obvious, but for dependencies around tooling for instance, it can get tricky to keep track of what’s needed.
A good practice could be to document when to add a dependency, and why a dependency is being used if not obvious. Our documentation on the matter contains the following cheatsheet for deciding whether to add a new package:
Auditing dependencies is important to make sure we do not use packages afflicted with known vulnerabilities. Various projects tackle this issue at scale such as Snyk or npm audit.
I personally like npm audit
because it’s baked by npm and free to use, but the console output can be daunting. That’s why I wrote a Node script wrapping npm audit
to make the CLI output a little more digestible and actionable.
It’s not published on npm because who has time for that, but it’s available as a GitHub Gist and can then be copied and pasted into a JavaScript file in one’s project. Among its cool features: any critical
dependency would throw an error, which makes it easy to include the script in CI/CD pipelines.
Looking for unused dependencies is not very convenient. There might be projects out there doing it, but who has time to deal with third-party dependencies to manage dependencies. So I wrote a very small Bash script to check whether dependencies are referenced at all in a project.
The idea is pretty straightforward: go through all the dependencies
(or devDependencies
), and then search within the project whether they are referenced, prefixed with an open quote (e.g. 'lodash
).
This specific search pattern will make sure to work for:
- require calls (e.g. require('lodash'))
- import statements (e.g. import lodash from 'lodash')
- partial imports (e.g. import lodash from 'lodash/fp')
If you happen to use double-quotes, you will need to update the script to reference a double-quote ("
) instead of single-quote ('
).
When extracted as a little groom_deps
function in one’s .zshrc
or .bashrc
file, it can be used within any project pretty conveniently. The type of dependencies (dependencies
, devDependencies
or peerDependencies
) can be passed as an argument and defaults to dependencies
.
function groom_deps {
key=${1:-dependencies}
for dep in $(cat package.json | jq -cr ".$key|keys|.[]");
do [[ -z "$(grep -r --exclude-dir=node_modules "'${dep}" .)" ]] && echo "$dep appears unused";
done
}
groom_deps devDependencies
Note that some dependencies are required while not being imported anywhere in JavaScript code. For instance, @babel/polyfill
, iltorb
or other similar dependencies can be necessary while not being explicitly mentioned in JavaScript code. Therefore, tread carefully.
The above script requires jq, which is a command-line utility to manipulate JSON.
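For illustration, the core check behind the shell function can be expressed in plain JavaScript. Here is a hypothetical findUnusedDeps helper (my naming, not part of any published script) mirroring the same open-quote grep pattern:

```javascript
// A dependency counts as referenced when any source file contains
// its name prefixed with an opening quote (e.g. `'lodash`), which
// covers require calls, imports and partial imports alike.
const findUnusedDeps = (dependencies, sourceFiles) =>
  Object.keys(dependencies).filter(
    (dep) => !sourceFiles.some((code) => code.includes("'" + dep))
  );

const deps = { lodash: "^4.17.0", "left-pad": "^1.3.0" };
const sources = ["import get from 'lodash/get'"];
console.log(findUnusedDeps(deps, sources)); // [ 'left-pad' ]
```

The same caveat applies as with the shell version: packages that are needed without ever being imported would be incorrectly reported as unused.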
You might be familiar with third-party tools like Dependabot or Greenkeeper to automatically submit pull-requests to update dependencies. They are nice, but they also have downsides:
That’s why a long time ago I authored a small Node program to look for outdated dependencies. Similar packages exist as well, this is just my take on it.
It works like this: it goes through the dependencies
(and optionally devDependencies
and peerDependencies
) of the given package.json
file. For each package, it requests information from the npm registry, and compares the versions to see if the one listed is the latest one. If it is not, it mentions it.
The output could look something like this:
Unsafe updates
==============
Major version bumps or any version bumps prior to the first major release (0.y.z).
* chalk @ 4.1.0 is available (currently ^2.4.2)
* commander @ 6.2.0 is available (currently ^3.0.0)
* ora @ 5.1.0 is available (currently ^3.4.0)
* pacote @ 11.1.13 is available (currently ^9.5.8)
* semver @ 7.3.2 is available (currently ^6.3.0)
* ava @ 3.13.0 is available (currently ^2.3.0)
* standard @ 16.0.3 is available (currently ^14.1.0)
npm install --save chalk@4.1.0 commander@6.2.0 ora@5.1.0 pacote@11.1.13 semver@7.3.2
npm install --save-dev ava@3.13.0 standard@16.0.3
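The safe/unsafe split shown above follows the rule quoted in the output. A rough sketch of how such a check could look (my own take, not the actual script’s code):

```javascript
// An update is "unsafe" when it bumps the major version, or when the
// package has not reached 1.0.0 yet (0.y.z), since semver makes no
// compatibility promise before the first major release.
const isUnsafeUpdate = (current, latest) => {
  const currentMajor = Number(current.replace(/^[\^~]/, "").split(".")[0]);
  const latestMajor = Number(latest.split(".")[0]);
  return currentMajor === 0 || latestMajor > currentMajor;
};

console.log(isUnsafeUpdate("^2.4.2", "4.1.0")); // true  (major bump)
console.log(isUnsafeUpdate("^6.3.0", "6.4.0")); // false (minor bump)
console.log(isUnsafeUpdate("^0.3.0", "0.3.1")); // true  (pre-1.0.0)
```

A production-grade version would lean on the semver package rather than string-splitting versions by hand.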
I actually never published the package on npm because I couldn’t be bothered to find a name that wasn’t already taken. The current recommended usage is to clone it locally and use it through Node or the CLI. I personally added the little snippet below to my .zshrc
file so it provides me a deps
function I can run in a project to look for dependency updates.
function deps() {
node ../dependency-checker/bin -p package.json --dev --no-pr
}
The script is by no means perfect:
That’s all I have. What about you, what are your tricks to keep your sanity when dealing with lots of dependencies in large projects?
In an ever-growing code base, it can be tedious, not to mention difficult, to look for code that is no longer used (or “dead code”). It cannot realistically be done by hand, and I don’t know of any solid tool that can automate it entirely.
So I wrote a small Bash script to take its best guess as to which files were no longer used. To detect that, I rely on the fact that all our imports look the same. For instance:
import Input from "@components/Input";
import looksLikeEmail from "@helpers/looksLikeEmail";
The leading at-sign (@
) is a Webpack alias to mean “from the root of the project”. This makes it more convenient to import files from anywhere. It has no incidence on the purpose of this article however.
This means that if we search for /Input'
and find no result, the Input
component is never imported anywhere. This only works because we never add /index
or /index.js
at the end of our imports.
Now, we only have to loop through all paths in our components
directory (or any other), and perform a search for every one of them. If the search yields nothing, the component is unused.
# Loop over every entry within the given path
for entry in src/components/*
do
# Grab only the directory name
# (e.g. `Input` from `src/components/Input`)
name=$(basename $entry)
# Perform a search in the `./src` directory
# and echo the path if it yields nothing
if [[ -z "$(grep -r "/$name'" ./src)" ]]; then
echo "$entry is unused"
fi
done
A convenient way to execute that code is to define it as a function in one’s .bashrc
or .zshrc
file. When wrapped as a function, it might look like this:
function groom {
root="${2:-.}"
for entry in "$1"/*
do
name=$(basename $entry)
if [[ -z "$(grep -r "/$name'" $root)" ]]; then
echo "$entry is unused"
fi
done
}
It can then be used by passing the folder to ‘groom’ as an argument, and the root directory for the code search as a second argument (./
by default):
groom src/components
groom src/components src
It’s not much, but I hope this helps you find some dead code without having to rely on build tools or dependencies. It’s a pretty low hanging fruit. ✨
Please note that I am by no means a Jenkins expert. I’m a frontend developer at heart, so this is all pretty alien to me. I just wanted to share what I learnt, but it might not be optimal in any way.
The declarative syntax is nice for simple things, but it is eventually quite limited in what can be done. For more complex things, consider using the scripted pipeline, which can be authored with Groovy.
I would personally recommend this structure:
// All your globals and helpers
node {
try {
// Your actual pipeline code
} catch (error) {
// Global error handler for your pipeline
}
}
For more information on the differences between the scripted and the declarative syntaxes, refer to the Jenkins documentation on the pipeline syntax.
By default, Jenkins tends to resort to fast failing strategies. Parallel branches will all fail if one of them does, and sub-jobs will propagate their failure to their parent. These are good defaults in my opinion, but they can also be a problem when doing more complex things.
When parallelising tasks with the parallel
function, you can opt out of this fast-failing behaviour with the failFast
key. I’m not super comfortable with the idea of having an arbitrarily named key on the argument of parallel
but heh, it is what it is.
Map<String, Object> branches = [:]
// Opt out of the fail-fast behaviour
branches.failFast = false
branches.foo = { /* … */ }
branches.bar = { /* … */ }
parallel branches
For programmatically scheduled jobs, you can also opt out of failures being propagated up the execution tree with the propagate
option:
final build = steps.build(
job: 'path/to/job',
parameters: [],
propagate: false
)
The nice thing about this is that you can then use build.status
to read whether the job was successful or not. We use that when scheduling sub-jobs to run our end-to-end tests, and reacting to tests having failed within terminating the parent job.
For performance reasons, we have a case where we want to run two tasks in parallel (foo
and bar
for the sake of simplicity), but whether or not one of these tasks (bar
) should run at all depends on environment factors. It took a bit of fiddling to figure out how to skip the parallelisation when there is only one branch:
def branches = [:]
// Define the branch that should always run
branches.foo = { /* … */ }
if (shouldRunBar) {
branches.bar = { /* … */ }
parallel branches
} else {
// Otherwise skip parallelisation and manually execute the first branch
branches.foo()
}
I don’t know how universal this is, but if you would like to mark a stage as actually skipped (and not just guard your code with a if statement), you can use the following monstrosity. This will effectively change the layout in BlueOcean to illustrate the skip.
def skipStage() {
  org.jenkinsci.plugins.pipeline.modeldefinition.Utils.markStageSkippedForConditional("${STAGE_NAME}")
}
For instance:
stage('Second') {
if (env == 'live') {
skipStage()
} else {
// Test code
}
}
It can happen that some specific tasks are flaky. Maybe it’s a test that sometimes fails, or a fragile install, or whatnot. Jenkins has a built-in way to retry a block a certain number of times.
retry (3) {
sh "npm ci"
}
Our testing setup is pretty complex. We run a lot of Cypress tests, and they interact with the staging backend, so they can be flaky. We cannot afford to restart the entire build from scratch every time a request fails during the tests, so we have built a lot of resilience within our test setup.
On top of automatic retrying of failing steps (both from Cypress behaviour and from a more advanced home-made strategy), we also have a way to manually retry a stage if it failed. The idea is that it does not immediately fail the build—it waits for input (“Proceed” or “Abort”) until the stage either passes or is manually aborted.
stage('Tests') {
waitUntil {
try {
// Run tests
return true
} catch (error) {
// This will offer a boolean option to retry the stage. Since
// it is within a `waitUntil` block, proceeding will restart
// the body of the function. Aborting results in an abort
// error, which causes the `waitUntil` block to exit with an
// error.
input 'Retry stage?'
return false
}
}
}
When you are not quite ready for continuous deployment, having a stage to confirm whether the build should deploy to production can be handy.
stage('Confirmation') {
timeout(time: 60, unit: 'MINUTES') {
input "Release to production?"
}
}
We use the input
command to wait for input (a boolean value labeled “Proceed” or “Abort” by default). If confirmed, the pipeline will move on to the next instruction. If declined, the input
function will throw an interruption error.
We also wrap the input
command in a timeout
block to avoid having builds queued endlessly all waiting for confirmation. If no interaction was performed within an hour, the input will be considered rejected.
To avoid missing this stage, it can be interesting to make it send a notification of some sort (Slack, Discord, email…).
To know whether a build is aborted, you could wrap your entire pipeline in a try/catch block, and then use the following mess in the catch.
node {
try {
// The whole thing
} catch (error) {
if ("${error}".startsWith('org.jenkinsci.plugins.workflow.steps.FlowInterruptedException')) {
// Build was aborted
} else {
// Build failed
}
}
}
It can be interesting for a build to archive some of its assets (known as “artefacts” in the Jenkins jargon). For instance, if you run Cypress tests as part of your pipeline, you might want to archive the failing screenshots so they can be browsed from the build page on Jenkins.
try {
sh "cypress run"
} catch (error) {
archiveArtifacts(
artifacts: "cypress/screenshots/**/*.png",
fingerprint: true,
allowEmptyArchive: true
)
}
Artefacts can also be retrieved programmatically across builds. We use that feature to know which tests to retry in subsequent runs. Our test job archives a JSON file listing failing specs, and the main job collects that file to run only those specs the second time.
final build = steps.build(job: 'path/to/job', propagate: false)
// Copy in the root directory the artefacts archived by the sub-job,
// referred to by its name and job number
if (build.status == 'FAILURE') {
copyArtifacts(
projectName: 'path/to/job',
selector: specific("${build.number}")
)
}
That’s about it. If you think I’ve made a gross error in this article, please let me know on Twitter. And if I’ve helped you, I would also love to know! 💖
]]>Disclaimer: this is not an official job offer. Please refer to the position on the N26 website to formally apply.
I will use certain terms over and over in this post, so I’d like to clarify terminology first and foremost so the rest makes sense:
As of today, there are about 15 engineers working on the customer-facing web platform, distributed across the product department in cross-functional teams, and located in our 4 offices (well, at least officially; they are currently working from home)—Berlin, Barcelona, Vienna and New-York.
The web team is absolutely fantastic. 💖 Not only is it quite mature with people having been there for several years, it also values inclusion and respect at its core. There are no overly inflated egos, no openly passive-aggressive behaviours.
It is a diverse group of people, from various genders, nationalities, backgrounds, with a common understanding of what it means to work with one another in a highly toxic and biased industry that stacked cards against certain groups of people.
While it did serve us to have two core tech leads for the longest time, the truth is there is less of a need for such bi-headed leadership at this stage. Thanks to a bit of restructuring to decentralise authority in order to empower our most senior engineers to move towards tech lead positions within their segment of work, we are now looking for a single person to replace the both of us as tech lead for the Core segment.
This role is definitely a little hybrid. Surprisingly enough, there is not a whole lot of feature development involved, as this person would not work in a cross-functional team on product features. The goal is mainly to look after the web team, the web platform and its underlying code base, maintaining the high quality standards already in place, and caring for web engineers across the board.
A non-exhaustive list of responsibilities would go like this:
Make sure web engineers get to do their best work without being limited or restrained by technical decisions. That implies a good deal of communication with them on a regular basis to make sure they are doing alright, can progress and understand what’s going on.
Ensure the web mono-repo remains up-to-date and tech-debt free, and continues to serve the goals of the company by enabling fast and continuous delivery of features and improvements across all web projects (website, support center, signup, web-app and webviews).
Continue to uphold high quality standards, especially when it comes to accessibility, security, testing and documentation. We have always gone to great lengths to deliver stellar work, which scales and stands the test of time, and it should continue this way.
Look after testability, deliverability and observability of the web platform. That might mean improving testing stability and speed, optimising the deployment pipelines, and working towards having the stats and metrics necessary to ensure permanent good health for the code base and the team.
Own the continuous integration and deployment pipeline of the web platform (and its testing and reporting strategy), and lead the migration from Jenkins to GitHub Actions and Kubernetes.
We need someone with a strong understanding of frontend development. While the mileage may vary, I think anyone who hasn’t worked in frontend and beyond for at least 5 years might be falling a little short for such a role. The code base is vast and somewhat complex, and this has to be balanced with difficult topics such as legal compliance, inclusive design, incident management, infrastructure work and of course, diplomacy and cross-team communication. This is not as scary as it sounds, but it definitely requires some experience. In other words, we probably need a senior fullstack engineer here.
More than that, we need someone who can join a team that puts people at the center of what they do—before code, before design, before product and bureaucracy. I believe this is one of the reasons people enjoy working in this environment, and I expect the next person to cultivate that empathetic mindset. That means making sure we run stress-free, we can make mistakes, we can learn from them.
Summing up what kind of candidate we would like to see taking on that role:
At least 5 years of frontend experience with some degree of expertise in accessibility, security, testing, documentation and observability. Additionally, due to the “fullstackiness” of the role, it is important to have experience with Node, Apollo GraphQL and a desire to work on CI/CD.
Being able to show empathy and people skills (for lack of a better term). Even though this role has no direct reporting lines, a non-trivial part of the job will be managing expectations, aligning teams/goals, and ensuring everyone is doing well and can work efficiently. It’s important to be comfortable in that aspect, and be willing to do it.
Some diplomacy, stability, and ability to work through difficult situations such as production incidents. Fortunately, there are very few incidents impacting web, but it happens. Being able to investigate issues, communicate outcomes, inform other departments such as customer support; it’s all to be expected.
If you’ve been following along this post, I think you’ll know what I’m going to say next. Here are a few reasons I would recommend that opportunity:
👩🏽‍💻 The web team is quite diverse, and cares about the environment in which we work and the way we communicate with each other. If you know anyone who’s interviewed for our team, they’ll tell you we actually discuss diversity and inclusion during our tech interview. It’s genuinely a wonderful team to be a part of.
⚙️ The web code base is modern and, for all intents and purposes, pretty damn clean. Keeping things tidy and understandable was a non-trivial part of my job for the last few years, and I hope that shows. Things are mostly consistent through and through, and all web features are built the same way, so the learning curve is essentially non-existent when navigating projects.
📖 We have a lot of documentation. If you have been following my work, you know how passionate I am about technical documentation. I have written about it on many occasions, and our documentation is the pinnacle of my role at N26. I encourage you to read more about the N26 docs.
♿️ We really care about accessibility. We have baked this topic in our ways of working virtually since day 1 and have been leading the topic at N26 for a long time. We automate accessibility testing where possible, and have documentation and knowledge sharing on the topic. This is not that common, and it’s a great environment to work in.
✅ Our motto has long been “do your work and go home on time” (or “stay home” since, you know, COVID). I’ve spoken repeatedly against people working too early or too late or during weekends (although I have done it myself on many occasions, thus being both hypocritical and a poor leadership example). This means we focus on testing automation, so people never have to deal with incidents. There is a formal on-call process which is opt-in and incredibly chill for web as hardly anything ever happens.
If all of this sounds interesting to you, and you think you’d be a good fit, please apply on the official job posting. If you would like more information about the role or have any question, you can ask me on Twitter—my DMs are open.
Last but not least, I would absolutely highly encourage people from under-represented groups to apply. Please, please, do apply. ✨
brew doctor
). The goal is to emit a lot of information about the system and working environment (git status, system, environment variables…) so the output can be shared with someone to compare with.
As you will see, there is quite a lot of information in there. And while most of it was relatively easy to access and display, some bits were trickier than I thought so, here we are.
Without further ado, let me show you what the script outputs (without fancy colours, sorry):
===============================================================================
* System *
===============================================================================
Operating System: Mac OS X 10.15.6
Distribution: darwin
CPUs: 12
Internet: true
VPN: none
→ Currently not on any VPN; consider connecting to the VPN.
Docker running: true
===============================================================================
* Node *
===============================================================================
Version: v12.18.3
npm: 6.14.8
nvm: true
Env: development
Modules: 1523
Installed: 13 days ago
→ The last node_modules install is over a week old.
→ Consider reinstalling dependencies: `npm ci`.
===============================================================================
* Environment variables *
===============================================================================
HTTP port: 8080
Source maps: none
Webpack bundle analyzer: false
Webpack metrics: false
Node process inspect: false
Verbosity level: info
Memory cache: true
Local API: staging
Code instrumentation: false
===============================================================================
* Git *
===============================================================================
Branch: doctor-script
Difference: 1
Last commit: Add a doctor script
Clean: false
Interestingly enough, there is no obvious way to check whether the machine has internet access from a Node script. A StackOverflow answer mentions that performing a DNS lookup on a popular domain is likely the way to go.
const hasInternetAccess = async () => {
try {
await promisify(require("dns").resolve)("www.google.com");
return true;
} catch {
return false;
}
};
Alternatively, Sindre Sorhus (no surprise there) has a handy npm package called is-online
which does essentially the same thing while being a bit more resilient to a single domain not being available.
This one has to be put in context: in the case of my team, the VPN grants us access to some APIs, so we tend to need to be connected to it in order to work. Therefore, I didn’t have to go too far here, and simply tried to ping our API domains. If it works, it means we’re on the VPN, otherwise we’re not. This is by no means a bulletproof solution to detect the presence of a VPN.
const axios = require("axios");
const https = require("https");

const ping = async (url) => {
  try {
    await axios.get(url, {
      // This is necessary to circumvent an `UNABLE_TO_VERIFY_LEAF_SIGNATURE`
      // Node.js error (at least in our case).
      // See: https://stackoverflow.com/questions/20082893/unable-to-verify-leaf-signature
      httpsAgent: new https.Agent({ rejectUnauthorized: false }),
    });
    return true;
  } catch {
    return false;
  }
};
const onVPN = await ping("https://our.internal.api.domain");
You might be familiar with the native `os` Node module, which grants some insights into operating system details such as the platform, the number of CPUs, and so on and so forth.

However, I wanted to detect the precise Mac version (e.g. Mac OS X 10.15.6) since we don’t all use the same. It turns out that this is not provided by the `os` module—the best we get is `darwin` as the platform. In another StackOverflow answer, I learnt that there is a file on all Mac systems that contains basic information about the OS.

If we could read that file, we could get the information we need. It turns out that we can definitely do that. It’s a `plist` file, which I came to understand is a flavour of XML for Apple systems (I guess?). In my case, I had `xml2js` at the ready, but the `plist` npm package might be even better.
const fs = require("fs");
const xml2js = require("xml2js");

const getMacOsVersion = async () => {
  const path = "/System/Library/CoreServices/SystemVersion.plist";
  const content = fs.readFileSync(path, "utf8");
  const { plist } = await xml2js.parseStringPromise(content);
  // Returns `Mac OS X` (at index 2) and `10.15.6` (at index 3)
  return plist.dict[0].string.slice(2, 4).join(" ");
};
For a more comprehensive solution, Sindre Sorhus happens to have a package to get the Mac OS release as well as a package to get the OS name.
To better manage our Node environment, we use nvm. As part of its documentation, nvm claims one can verify the installation worked properly by running `command -v nvm`. Running this command should return `nvm` if it’s installed. And it does do that just fine, but when running it from within the script with `execSync` (from the `child_process` native module) I got a permission error for some reason.
After much searching, I found a StackOverflow answer that explains that `nvm` is meant to be sourced, which means it cannot be run programmatically from a script:

> `~/.nvm/nvm.sh` is not an executable script, it is meant to be “sourced” (not run in a separate shell, but loaded and executed in the current shell context).
I had to change strategies, and decided to keep things simple by checking whether the `$NVM_DIR` environment variable—set by nvm—was empty or not.
const cp = require("child_process");

const exec = (command) => cp.execSync(command).toString().trim();
const hasNvm = exec("echo $NVM_DIR") !== "";
Debugging a Node problem usually ends up with “I reinstalled my node_modules and now it works.” I was wondering if I could detect when Node modules were last installed.
To do so, I thought I could check the creation date of any folder within the `node_modules` directory (here I use `react` because it’s one of our dependencies we’ll likely never get rid of). I initially thought I could check the `node_modules` folder itself, but it turns out it’s not removed when reinstalling modules, only emptied.
I have come to understand that this will not work on all operating systems, because it relies on the timestamp at which a folder was created, which is not a standard.
const { promisify } = require("util");
const moment = require("moment");

const getStats = promisify(require("fs").stat);
const stats = await getStats("./node_modules/react");
const lastInstall = moment(stats.birthtime);
const relative = lastInstall.fromNow(); // E.g. "3 days ago"
From there, we can emit a gentle warning if the last install is over, say, a week old.
if (moment().diff(lastInstall, "days") >= 7) {
  console.warn("The last node_modules install is over a week old.");
  console.warn("Consider reinstalling dependencies: `npm ci`.");
}
There are probably more elegant checks we can do regarding Docker, but I wanted a quick way to figure out whether Docker was running in the background or not. The `docker version` command will only return a 0 exit code when Docker is effectively running, and a non-zero one otherwise (not running, or not installed).
const isDockerRunning = () => {
  try {
    cp.execSync("docker version", { stdio: "ignore" });
    return true;
  } catch {
    return false;
  }
};
There are a few pieces of Git information we can display: which branch are we currently on, is it clean, how far is it from the main branch, and what is the last commit?
Finding the current branch is easy, as Git provides a way to get just that. To know whether it is clean, we can use the `--porcelain` option (so sweet) of `git status`, which will return an empty string if clean.
const branch = exec("git branch --show-current");
const clean = exec("git status --porcelain") === "";
Getting the number of commits between the current branch and the main branch (in whichever direction) is a little more tricky, but can be done with `git log`. From there, we could emit a gentle warning if they look quite far apart:
const mainBranch = "develop";
const difference = Number(
  exec(`git log --oneline ${branch} ^${mainBranch} | wc -l`)
);
const threshold = 10;

if (difference > threshold) {
  console.warn(
    `The local branch (${branch}) is over ${threshold} commits apart (${difference}) from ${mainBranch}; consider rebasing.`
  );
}
Finally, grepping the last commit message can be done with `git log` as well:
const lastCommit = exec("git log -1 --pretty=%B").trim();
I am sure there are many other details we could add to the script (find a lite version on GitHub Gist), and it will likely evolve across the next few weeks and months. Some ideas I played with but didn’t complete for not wanting to install more npm packages just for the sake of it:
- Whether dark mode is enabled: `node-dark-mode` from Sindre Sorhus does just that by interacting with the OS.
- Whether the camera is on: `is-camera-on` from you know who.

Nevertheless, that was a lot of fun to write and figure out. If it helped you or you have any suggestion, please get in touch on Twitter! :)
The following is a guest post by Jesús Ricarte, a frontend developer and volunteer translator for A List Apart in Spanish. I’m very glad to have him writing here today about line heights and using math in CSS!
Although we can apply any CSS unit to line-height, a unitless value such as 1.5 is the most commonly recommended way to handle it. To begin with, here is an explanatory image of how line-height is applied by the browser:
As you can see, the line height is distributed across different areas:
Therefore we could express it as:
lineHeight = leading / 2 + content + leading / 2
Or:
line-height: calc(0.25 + 1 + 0.25);
However, this approach has a maintenance downside: as you can see in the following demo, it sets too much line height on larger font sizes. In order to establish optimal readability, we must manually tweak it on every `font-size` increment, down to 1.1 on very large font sizes.
See the Pen calc line-height: demo 1 by super-simple.net (@supersimplenet) on CodePen.
To see this more clearly, let’s look at our demo figures in a comparison table (computed line-height values are in pixels for easier understanding):
| | line-height: 1.5 | line-height: 1.1 |
|---|---|---|
| font-size: 10px | 15px | 11px |
| font-size: 50px | 75px | 55px |
In order to get an optimal `line-height`, we will need to be as close as possible to the 1.5 value (15px) on smaller font sizes, but closer to 1.1 (55px) on larger ones.
Wait… 11px is already pretty close to 15px. We're just a few pixels away.
So, instead of starting from a 1.5 value, why don’t we flip it over? We could start down from 1.1, adding just the few pixels we need, which will make almost no visual difference on larger font sizes, but will on smaller ones.
Something like:
line-height: calc(2px + 1.1 + 2px);
Revisiting our computed `line-height` comparison table:
| | LH 1.5 | LH (2px + 1.1 + 2px) | LH 1.1 |
|---|---|---|---|
| font-size: 10px | 15px | 15px | 11px |
| font-size: 50px | 75px | 59px | 55px |
That's better! We nailed it in small font sizes, and get pretty close on larger ones.
Unfortunately, `line-height: calc(2px + 1.1 + 2px)` is invalid CSS, since unit and unitless values can’t be mixed. Could we use any relative unit that computes to about 1.1?

Kind of: the `ex` unit computes to the current font’s x-height (the height of the lowercase letter “x”), so we just need to find the perfect match for our formula.
In fact, any relative unit (`em`, `rem`…) can be used, but since we’re calculating line height, it makes sense to use a height unit.
Since every typeface has its own `ex` value, we still need to fine-tune our `px` and `ex` values. Anyway, consider this a good starting point:
line-height: calc(2px + 2ex + 2px);
As you can see in the following demo, it sets a very nice line height across a wide range of different typefaces:
See the Pen calc line-height: demo 2 by super-simple.net (@supersimplenet) on CodePen.
That’s valid CSS. Also, the `ex` unit has very good browser support. Hooray!
If you apply the formula on a parent element and `font-size` is changed on a descendant element, `line-height` would be unaffected on the descendant, since it has been calculated based on the parent’s `font-size`:
.parent {
  font-size: 20px;
  line-height: calc(2px + 2ex + 2px);
  /* computed: 2px + (2 * 20px) + 2px = 44px */
}

.parent .descendant {
  font-size: 40px;
  /* desired: 2px + (2 * 40px) + 2px = 84px */
  /* computed: 2px + (2 * 20px) + 2px = 44px (same as .parent) */
}
This can be solved by applying the formula to all descendants, with the universal selector:
.parent * {
  line-height: calc(2px + 2ex + 2px);
}
Our formula also helps with responsive typography. Using relative-to-viewport units (`vw`, `vh`, `vmin`, `vmax`) leads to a lack of fine control, so we can’t tweak line-height on every font-size change.
This issue was also tackled by CSS locks technique, which uses relatively complex arithmetic to establish a minimum and maximum line-height.
Quick back-story about web at N26: we started rebuilding the web platform from scratch as well as the web team about 3.5 years ago. We had a super fragmented tech stack at the time (webapp in Backbone, site in Wordpress, support center in Node + templates…) and wanted to unify all of it. We had a few options, but ultimately decided to go with a monolith approach.
For us, it works like this: the web platform is on a single repository, but serves 4 different projects (the registration flow, the online banking application, the website and the support center). We build projects individually with Webpack, but 95% of the code-base is considered shared. In a way, our repository is a framework on which we build web projects.
N26 currently has about 20+ web engineers who all work full-time on the mono-repo, albeit in different cross-functional teams. On top of that, we release our 4 web projects at the same time on a daily basis. That means we need our code-base to be in a constant “ready state”. We ensure that by having an easy-to-use feature flagging setup, an all-hands-on-deck peer review process, and an open and quick feedback loop. I expand on these points below.
Because we release every day, and because most non-trivial stories need more than a day to be completed, we need a way to work without impeding live deployments.
We have build-time feature flags that can be toggled on and off per environment. They are injected as global variables with Webpack and thus benefit from dead-code elimination.
When starting a new feature, we add a new feature flag which is only enabled in local environment. Once the feature is taking shape, we enable it for the dev environment. When it’s getting ready, we enable it on staging environment. And finally, after some staging testing (both manual and automated) we can turn it on for live. We leave the flag for a day or two in the code base in case we need to turn it off, and eventually remove all the new dead code around it.
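To make the idea concrete, here is a minimal sketch of per-environment flags. The flag name and environment ladder below are illustrative, not our actual configuration; in the real setup, the resolved values are injected as globals at build time (e.g. via Webpack’s DefinePlugin) so that disabled branches are stripped as dead code:

```javascript
// Illustrative sketch only: the flag name and environments are made up.
// Each flag maps to the environments where it is enabled.
const FLAGS = {
  NEW_ONBOARDING: { local: true, dev: true, staging: true, live: false },
};

// Resolve a flag for the environment the bundle is being built for.
const isFlagEnabled = (flag, env) => Boolean(FLAGS[flag] && FLAGS[flag][env]);
```

At build time, each resolved flag can then be inlined as a global (say `__NEW_ONBOARDING__`), so `if (__NEW_ONBOARDING__) { … }` compiles down to either the feature code or nothing at all.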
We take a very pragmatic approach to reviewing code: everybody should do it, regardless of seniority level, and we have no concept like “1 senior approval to merge”.
We trust people to make smart decisions. If it’s a small thing, having a single approval is fine. If it’s a critical refactoring, having multiple reviews, including from people more familiar with the code is recommended.
This loose policy as well as the fact we don’t debate code opinions during review means we go through 20 to 30 pull-requests a day, and most of them tend to be open for less than an hour.
We make sure we communicate, not only on pull-requests but also in person (whether physically or remotely). Most changes affecting more than a single engineer are announced on Slack, and we have a weekly meeting to talk about repo-wide improvements and refactoring so no one is left behind.
We also started doing screencast sessions, where an engineer familiar with a portion of the code-base would walk through it sharing their screen so other engineers get a sense of how things work besides documentation.
The idea is that engineers relate to the whole platform rather than their project alone. That’s critical so that we keep things aligned and not too project-specific which will lead us towards the micro-projects path.
I think another big aspect of our structure is that we have the concept of “Web Core”. The idea is that there are always some engineers working on the web platform as a whole. Things like release process, test infrastructure, dependency updates, large-scale refactoring and so on.
This is done in a unit of a few engineers changing every 2 weeks, with permanent tech leads. This way, we can keep things moving forward and up to date, and all engineers get a feeling of how our system works besides their project.
Now I must be transparent about the drawbacks of a mono-repo and a unified releasing approach.
The main thing is that we’re only as fast as our slowest project. We have a lot of automated tests for our banking application, and it can take a few hours to have a passing build. This means projects that are faster are still released only once a day at best because our slowest project cannot realistically be deployed more than once a day at the moment.
We could work towards having independent releases, but given everything is shared, that remains tricky. “Which version of the button component is currently deployed on the website?” or “Why isn’t this security patch live on the registration flow?” are not questions you want to ask.
Another possible drawback is that all projects use the exact same tech stack and system, even though that might not be the best suited approach. For instance, could we have a statically generated website instead of having server-side rendering at all? Probably. But we don’t because we didn’t design it that way, and that would be unique to a project which our codebase doesn’t quite permit.
Other than that, it’s pretty great.
We all have an impact on each other’s work—for good or for bad. That means no one is truly isolated on their own project. They are an active member of our web community and see the platform grow and improve on a daily basis, which is a good thing both from a technical standpoint but also a communication one.
All projects become better by the day by the sheer fact that they belong to the mono-repo, and that’s pretty good for maintainability (and security, performance, consistency, and whatnot). I can’t stress enough how important this all is.
TL;DR: There are quite some technical decisions I regret, but going with a mono-repo ain’t one of them.
Our web banking application is almost entirely tested end-to-end with Cypress. We have about 120 suites, taking up to an hour to run.
In this article, I’d like to share how we went from having static accounts to handling dynamic account creation and authentication, and how we came up with account caching to speed up our runs.
Originally, we had a few static accounts that we manually created for test purposes. We’d have an account that didn’t confirm their email, one that did, one that didn’t go through the product selection, one that did, an account that’s premium, and so on.
These accounts’ credentials were stored in a JavaScript file, which we imported and used as part of our custom `login` command at the beginning of each test.
import { STANDARD_ACCOUNT } from "@tests/utils/accounts";

describe("Personal settings", () => {
  before(() => {
    cy.login(STANDARD_ACCOUNT);
  });
});
The problem with this strategy was that soon enough, these accounts were extensively bloated with hundreds of thousands of transactions and hundreds of inactive credit cards. In turn, pages were getting sluggish and the tests more and more flaky. Moreover, our tests were thus bound to a single environment.
N26 has an internal service to create accounts. We created a Cypress command to dynamically create a user through that service. Fortunately, the service comes with a lot of handy default values, so we only need to pass a few key parameters.
cy.createUser({
  confirmEmail: false,
  residenceCountry: "ITA",
  topUp: 100,
});
Under the hood, this command fires a request to the internal service, and receives the newly-created user’s information as a response. It contains a lot of data about the user, such as their identifier, name, birth date, residency, nationality—all of which is generated at random with Faker.
Then we would start all our tests by creating an account, then logging into that account with another custom command.
describe("Personal settings", () => {
  before(() => {
    cy.createUser().then((user) => cy.login(user));
  });
});
While creating accounts on the fly for each test turned out great for test isolation and avoiding account bloating, it also slowed down our test suite quite a bit, as every test ended up doing multiple requests just to set up an account.
Because most tests are not performing destructive actions, we thought we could try caching them during a test run. For instance, the first test would create an account, then the second test would login with that account instead of creating yet another one.
Two critical aspects of that solution: it needed to be opt-in, so we don’t introduce side effects. And we needed to make sure that accounts are reused only when they are in the same state. That means for instance that a test needing an account with a deactivated card cannot reuse an account with an activated card.
We created a `getAccount` command on top of our `createUser` one. It takes the exact same configuration as the `createUser` command, that is, the payload sent to the internal service to create a new account. The only difference is that it also accepts a `cache` option that is `false` by default (opt-in, remember?).
It works like this:

- If the `cache` option is not passed or is false, `getAccount` just calls `createUser` and that’s it.
- If the `cache` option is true, the `getAccount` command serialises the given configuration object and checks whether a cached account for that configuration already exists.
  - If it does, it returns the cached account (logging into it unless told otherwise).
  - If it doesn’t, it calls `createUser` to get an account, and we store it in the cache before returning it.

The code (stripped out of unnecessary things) looks like this:
const cache = new Map();

export default function getAccount(conf = {}) {
  const key = stringify(conf);

  if (conf.cache && cache.has(key)) {
    return typeof conf.login === "undefined" || conf.login
      ? cy.login(cache.get(key))
      : cy.wrap(cache.get(key));
  }

  return cy.createUser(conf).then((account) => {
    if (conf.cache && account) {
      cache.set(key, account);
    }

    return cy.wrap(account);
  });
}
Note that `JSON.stringify` does not guarantee key order, which means two identical objects with keys in a different order will not be stringified the same way. We use a lib that ensures key sorting to prevent that problem.
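For illustration, a minimal key-sorted stringify could look like this. This is a hypothetical sketch, not the exact library we use; something like `json-stable-stringify` handles many more edge cases:

```javascript
// Hypothetical sketch of a deterministic stringify: objects are serialised
// with their keys sorted, so two equivalent configurations always produce
// the same cache key. Not production-grade (no cycles, dates, etc.).
const stableStringify = (value) => {
  if (value === null || typeof value !== "object") return JSON.stringify(value);
  if (Array.isArray(value)) return `[${value.map(stableStringify).join(",")}]`;
  return `{${Object.keys(value)
    .sort()
    .map((key) => `${JSON.stringify(key)}:${stableStringify(value[key])}`)
    .join(",")}}`;
};
```

With this, `stableStringify({ cache: true, topUp: 100 })` and `stableStringify({ topUp: 100, cache: true })` hit the same cache entry.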
We can now start our tests with a single call to `getAccount`, passing the `cache: true` option when possible, so we retrieve accounts from the local cache if available, or create and cache them otherwise.
describe("Personal settings", () => {
  before(() => {
    cy.getAccount({ cache: true });
  });
});
I believe one of Cypress’ best features is its extensibility. Creating custom commands is trivial, and it becomes very easy to create your own testing framework on top of Cypress.
We’re consistently working on making our testing infrastructure faster and more resilient. Cypress, in many ways, enables us to do that in ways that other testing tools like Selenium could not.
I hope this helps!
In most cases, the way a cookie banner works is that it renders the banner, and when the user interacts with it, it sets a value in a cookie so next page loads do not render the banner again.
We can set that cookie before loading any page thanks to a Cypress event.
In the code below, replace the value of the two main constants with the way it works for your website, and add this code snippet to Cypress’ “support file” (which defaults to `cypress/support/index.js`).
// The name of the cookie holding whether the user has accepted
// the cookie policy
const COOKIE_NAME = "cookie_notice";
// The value meaning that user has accepted the cookie policy
const COOKIE_VALUE = "ACCEPTED";
Cypress.on("window:before:load", (window) => {
  window.document.cookie = `${COOKIE_NAME}=${COOKIE_VALUE}`;
});
If your code relies on Local Storage instead of cookies to store consent, the concept is exactly the same.
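As a sketch of that Local Storage variant (the storage key and value below are placeholders, to be replaced with whatever your site actually uses):

```javascript
// Hypothetical Local Storage counterpart of the cookie snippet above;
// the key and value are placeholders for your own consent mechanism.
const STORAGE_KEY = "cookie_notice";
const STORAGE_VALUE = "ACCEPTED";

// Extracted as a function so it is easy to test in isolation.
const setConsent = (win) => {
  win.localStorage.setItem(STORAGE_KEY, STORAGE_VALUE);
};

// In the Cypress support file:
// Cypress.on("window:before:load", setConsent);
```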
Node: Node (or Node.js) is a “runtime environment”. It’s a server-side environment that runs JavaScript code. The same way your browser has a JavaScript engine, well Node has one as well. This allows you to execute JavaScript code, like scripts, outside of a browser environment.
npm: npm is the package manager for Node (despite claims it doesn’t stand for “Node Package Manager”). All languages have a package manager (Java has Maven, PHP has Composer, Ruby has RubyGems, etc.). Npm allows you to manage Node dependencies (packages), such as installing and removing them. Npm comes bundled with Node by default, so you don’t have to install it yourself.
Packages: Packages are versioned little bundles of code that people write and publish for others to use. Cypress and Faker, amongst many many others, are packages (and big ones at that).
npx: npx is another command-line utility provided by npm. It’s a bit of an all-in-one command to execute the binary (see below) of the given package name. It will first look in the local project if the package is installed there, then for a global install on your machine, and will temporarily install the package otherwise.
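That lookup order can be sketched as a tiny decision function. This is a deliberate simplification for illustration only; the real npx also deals with versions, caching, and prompting before a temporary install:

```javascript
// Illustrative simplification of npx's lookup order; names are made up.
const resolveBinary = (pkg, { localBins = [], globalBins = [] }) => {
  if (localBins.includes(pkg)) return "local"; // ./node_modules/.bin
  if (globalBins.includes(pkg)) return "global"; // global npm prefix
  return "temporary-install"; // fetched on the fly, then discarded
};
```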
When you want to use a package, such as Cypress or Faker, you need to install it. There are two ways to do that: you can install it globally on your machine (with the `-g` option), which is usually discouraged because it’s a little obscure and not very manageable. Or you can install it locally for your project. This is the recommended option.
When you do `npm install <package>` in a directory that has a `package.json` file, it will do 3 things:
It will add a line inside your `package.json` file to note that the package you just installed is now a dependency of your project. That means your project relies on it.
It will add the package’s code, as well as the code of all the dependencies of that package (and their dependencies, and so on and so forth), into a directory called `node_modules`. This automatically-generated directory contains the source code of all the dependencies of your project. It is usually listed in `.gitignore` so that it doesn’t get committed (as it’s freaking huge and not your own code). You can safely delete this directory and reinstall all the dependencies of your project with `npm install` at any time. “Have you tried reinstalling your node_modules?” is basically the debug-101 of projects using Node. 😅
It will generate (or update) a file called `package-lock.json`. This is an automatically generated file that should never be updated by hand. It contains the version of all your dependencies (as well as their dependencies, and the dependencies of your dependencies, and so on and so forth). This file is a manifest that makes it possible for someone to come after you (or yourself), run `npm install`, and have the exact same packages as you did. Think of it as a snapshot of all your project’s dependencies.
Alright, so let’s recap a little bit what we just learnt.
Node is an environment to execute JavaScript code. It has a package manager called npm, which is used to install (and reinstall) packages.
A project usually has dependencies, because not everything should be coded from scratch. These dependencies are installed through npm, and listed in the `package.json` file. When installed, their code is in `node_modules`.
Okay, so now that we have dependencies installed for our project, how do we use them? Well, that depends on what these dependencies do. Let’s take two different examples: `cypress` and `faker`.
Cypress is a tool. It gives you commands like `cypress open` and `cypress run`. That’s what we call a “binary”. Basically, it means it gives you something you can execute from your terminal. This executable is exposed by Cypress in the `node_modules/.bin` folder. Any package that provides an executable will have it located in that folder. That’s why you can run `./node_modules/.bin/cypress` (or `$(npm bin)/cypress`, which is the exact same thing).
Faker, on the other hand, does not provide an executable. It gives you JavaScript utilities you can import in your JavaScript code. You import that dependency by doing `import faker from 'faker'` in your JavaScript files. Node can magically resolve `from 'faker'` by going into `node_modules/faker` and finding the relevant files. That’s pretty handy, so you don’t have to do `import faker from './node_modules/faker/lib/something/specific/to/faker/index.js'`.
Alright, so let’s sum up what we just learnt:
Some packages provide executables, some don’t. All packages providing an executable can be executed with `./node_modules/.bin/<package>`.
Most packages do not provide a command-line executable, and are made to be imported within a JavaScript file. This can be done with `import something from '<package>'`. What is being imported depends on the package and can be figured out by reading its documentation.
I hope this helps!
So I did, with incommensurable help from Mike Smart. So we did, us all, the web engineers that have been and are still with us to this day. In this article, I would like to share a few things I learnt and discovered along the way.
N26, like many startups, is growing fast. When I joined, we were just about 100 people. Now, it’s way over a thousand, in about 3 years. We had to hire a lot, and quickly. I am very thankful I got to lead hiring for the web team because I could make sure we balanced hiring fast with hiring well.
Hiring in the tech industry is just like the tech industry itself: completely messed up. We impose unrealistic and unreasonable expectations on people. We completely overstate the value of technical skills and we think writing code is way more difficult than it is. This, in turn, creates weak homogeneous teams of fragile egos.
I wrote extensively on how we hire and—while I do think I made a few mistakes along the way—I also feel like it worked exceptionally well. At the risk of sounding cheesy, the N26 web team is by far the best team I have ever worked in. It’s made of over 20 diverse individuals who respect each other to build a good product for everyone.
We are not just a group of technicians working for the same company. And by that, I don’t mean that we are necessarily all friends, or “like a family” (which I think is also an understated wrong trait of the startup culture). I mean that we are more than the sum of our skills. We have a shared vision, with shared values, like respect, trust, and inclusion (both within, and from a product standpoint).
As your team grows, you want to cut as many sources of friction as possible when it comes to writing code. One way to do that is to make most discussions around the way to write code over before they even start.
Don’t spend time arguing about formatting: set up Prettier. Don’t spend time reviewing coding errors: set up ESLint. Don’t spend time discussing common patterns: define and document them.
You will want your time spent discussing code to be about solving problems, not bikeshedding on the way to write said code. Writing the code truly is the easy part of our job, in part because it can be significantly eased with tools and processes.
I have recently written about our documentation. I cannot stress this enough: it’s all about documentation. I think most developers seriously tend to underestimate the benefits of properly written and maintained docs.
Here are the things that it makes easier:
👋🏻 Onboarding new team members. Having comprehensive documentation gives them autonomy, and enables them to get started faster and more comfortably. It gives people the tools to work and progress—especially to the people who crucially need these tools.
✅ Settling discussions by defining one way to do things. Of course this can change, and the one way might become another way down the line, but at any point in time, it is important to have a single common and agreed on approach.
🏝 Removing knowledge islands. One of the worst things about someone leaving (besides, you know, them leaving) is all the knowledge they take with them. Companies tend to think that having a month or two of overlap with the next hire is enough to minimise that, but it isn’t. I can guarantee that no amount of time overlap will be enough for me to share over 3 years of company, product and code knowledge. Documentation is what will. Note that this is not specific to someone leaving: it also applies when someone with specific knowledge is not available (other project, holidays, sickness…).
There are many reasons why a company would not invest in testing. Sometimes we “don’t have time”. Or “it’s never gonna change, no need”. Or “it’s too complicated to test”. That might be a fine decision on the spot, but it’s going to come back to bite you down the line.
One way to fight that problem is to not only invest in tests, but also invest in a testing framework. And by this I don’t mean Jest, Mocha or Cypress. I mean in building a tooling system that enables developers to write tests efficiently, and said tests to be run automatically at appropriate time.
We noticed that a lot of junior and mid-level engineers have only very little experience with automated testing, if at all. For most of them, it’s a bit of Jest here, and sometimes some Cypress there. Given how complex it can be to set up automated testing, I can totally understand why testing knowledge is not more widely spread.
Spare your engineers from having to mess with dependencies, environment variables, configuration and whatnot. Have them focus on the meat: writing good and relevant tests. They should not have to worry too much about where or when these tests will be run. The system should guarantee that the tests they write will be run.
Invest in your testing setup, folks. Make it good. Make it robust. Make it helpful. Don’t let it fall through the cracks.
As more and more engineers work on a given project, the technical debt will grow. That’s pretty normal, and that probably stands true for most projects, regardless of the amount of developers working on it. Because technical debt is inevitable, it is also somewhat okay. What is important is to not only acknowledge it, but also keep track of it. I would recommend maintaining a backlog of things to do.
Whenever something out of scope comes up in code review, add a ticket to the backlog describing the task. This makes sure it won’t be forgotten, and avoids riddling the code base with `@TODO`s. Similarly, whenever someone has an idea for improvement, add a ticket to the backlog. It can be picked up later.
I believe we should always be able to assess the health of a code base, at least on a high level. Things like large-scale refactoring and major dependency updates should be accounted for so they don’t get forgotten.
If I had to reflect on my experience as a tech lead (or whatever fancy title it is) over these 3 years is that it’s important to let people experiment, make mistakes and take ownership. Micro-management is a counter-intuitive work methodology, and I certainly must have failed at this on multiple occasions.
For people to grow and feel valued in an organisation, they have to be able to take on responsibilities. I feel like we did a fair job at making sure people would not have responsibilities imposed on them that they didn’t want or couldn’t live up to, but we probably could have done better at letting people take on more at times.
I have always felt conflicted between doing things myself so people don’t have to deal with them and can focus on their work, and letting people do these things at the risk of causing them stress or discomfort.
A good example of that is shipping code to production. We have released our web platform over 700 times in the last 3 years, and I must have orchestrated 90% of those releases. Mostly because it’s sometimes a little difficult, and more importantly, because I know it can be stressful for some people, especially less seasoned engineers. Now, some people were probably happy I took on this task repeatedly, but by doing so I also deprived some curious engineers of a learning opportunity.
I have recently been taught the word “sonder”. That is the realisation that everyone, including passers-by, has a life as complex as our own, which they are living despite our personal lack of awareness of it. I find it interesting because it’s all too obvious but also quite a discovery in itself. People are not NPCs in our lives. Who knew, right?
I have absurdly high expectations for myself, and sometimes I expect people to do the same for themselves. That’s not quite how things work though, and everyone is trying to do the best they can. The Prime Directive of Agile says something similar:
“[W]e understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.”
— The Prime Directive of Agile
I guess the lesson here is to manage expectations. Sometimes we’re wrong to assume people don’t want responsibilities. What’s important is that people get to decide when they’re ready, so they remain in control of their personal growth.
There are many more things I could share about my experience at N26, but I guess that will do for now. If I have to give key takeaways for anyone having the responsibility to build a team and a platform, it’s these:
💖 Be kind. Show empathy. Trust the people you hire and work with, and constantly aim to have a safe and healthy environment for everyone. Especially for the most vulnerable people.
🤔 Don’t overthink code decisions too much. At the end of the day, this is usually not that crucial, and this is not what defines you and your team. Make sure things are clean and consistent, but don’t fall into bikeshedding.
✅ Make sure to consider tests and documentation from the start, and all the way through. They are not sprinkles on top of the cake. They should be an essential part of your actual output, and they help tremendously down the line.
Finally, enjoy what you do, and make sure other people do too. We spend so much time at work. And even when we’re not behind the desk, work is somewhat at the back of our mind. Make sure that time counts.
The purpose of technical documentation is to help people, of any level, perform the task at hand. It should serve as a knowledge base and a guide. It should explain how things work and how to do things in a given project.
Having said that, the first thing I personally feel strongly about when it comes to documentation is that it is not a history document. It should not tell the tale of how things were back in the days™, or how they will eventually maybe hypothetically be in a distant future. It should describe the current state of things only.
Similarly, technical documentation is not a literary essay aiming at entertaining its readers with anecdotes and jokes. It should be straightforward and efficient. Unlike a blog of some sort, documentation is not centered around its author but around its readers.
Speaking of the author, I believe their presence should be non-existent. The personal experience and viewpoint of the author are irrelevant to the purpose of technical documentation. I recommend using “we” or “you” throughout, depending on whichever you prefer or is more suited to the content. Remember: the more consistent, the better.
The biggest problem of technical documentation—probably after the lack thereof—is keeping it up-to-date. You know how it goes: someone writes a handy guide for a feature of some sort. Then, months later, someone else changes the way the feature works, but completely forgets about the documentation. Then months later, someone finds the documentation to update the feature again, but it is completely obsolete and makes no sense whatsoever. And everybody’s sad.
N26, like many companies, uses the Atlassian tools suite, including Confluence for documentation. Most of the company’s documentation lives there so it can be searched, used and most importantly audited. That mostly matters for topics that are likely to be audited such as product, legal, compliance, data privacy, banking regulations, security and such.
Having said that, let me share my personal opinion: Confluence is absolute garbage when it comes to technical documentation. It is clumsy to write on Confluence. It is cumbersome to read on Confluence. It is slow, it is ugly, it is far away from the code… That’s why we (the web team) decided from day 1 not to use Confluence for our technical documentation.
We keep our documentation on GitHub, alongside the code. The web platform is stored in a single repository, which makes it even easier. We have a docs folder at the root level which contains all our documentation in Markdown format. A few benefits to that: among other things, because docs comes before src alphabetically, documentation comes up first in search results.

To make it even less likely to forget updating documentation, we also mention it in our GitHub pull-request template, so we have an extra reminder when submitting a pull-request for review.
Documentation is only as useful as it is read. There is no point in having the best docs in the world if nobody knows they exist. So it’s important not only to emphasise that documentation is a first-class citizen like code, but also to make it available to everyone.
In order not to restrict access to people with GitHub accounts, we decided to build it with Gitbook and publish it on a route of our testing servers. The nice thing about Gitbook is that it comes with a search engine, a soft design, a robust navigation system and some accessibility features out of the box.
Somewhere in our deployment pipeline, we run the following command:
npx --package gitbook-cli gitbook build . build/docs
And our Express server has the following route:
if (!LIVE) {
server.use('/docs', express.static('build/docs'))
}
And voilà, look at this beauty:
Once our documentation is up and running, it is important to actively promote it. Getting-started guides and related READMEs should mention and link to it. The documentation itself should be generous with links to other parts of itself, cross-referencing pages to encourage people to browse through it.
When answering someone’s question, it is a good idea to include a link to the relevant section of the documentation if it contains the answer or related content. And if it doesn’t, this is likely to be a good opportunity for an addition.
Similarly, I recommend openly announcing (in an organisation tech channel of some sort) when new pages are being added to the hub. This contributes to growing the influence of the documentation hub within the company.
So if I had to sum up, here is a TL;DR of what I would recommend in regard to documentation:
Documentation is a living organism. Our hub is 3 years old and keeps growing on a daily basis. As of writing, it is almost 60,000 words (or the equivalent of a ~200-page book) spread across about 60 Markdown files.
The more people rely on it, the better its quality, as there are more and more authors and maintainers. It is everyone’s responsibility to keep it alive and healthy, from the newcomer to the most senior person on the team. Everyone reads docs, ergo everyone should write docs.
Oh, and in case you came for the memes, you’ll be pleased to know that this has been in our README for pretty much ever:
Unfortunately for us, CSS does not provide trigonometry functions yet (although there are plans to implement them), so we have to rely on another language for that. We have three options:
Let’s start with some simple markup:
<div class="container">
<div class="ribbon">Express</div>
</div>
And some basic styling (without all purely aesthetic considerations):
/**
* 1. Positioning context for the ribbon.
* 2. Prevent the edges of the ribbon from being visible outside the
* box.
*/
.container {
position: relative; /* 1 */
overflow: hidden; /* 2 */
}
/**
* 1. Start absolutely positioned in the top right corner of the
* container.
* 2. Horizontal padding is considered in the ribbon placement.
* The larger the ribbon (text + padding), the lower in the
* container it might have to be.
* 3. Make sure the content is centered within the ribbon itself.
* 4. Position the ribbon correctly based on its width, as per
* the following formula: `cos(45 * π / 180) * 100%`.
*/
.ribbon {
position: absolute; /* 1 */
top: 0; /* 1 */
right: 0; /* 1 */
padding: 0 2em; /* 2 */
text-align: center; /* 3 */
transform: translateY(-100%) rotate(90deg) translateX(70.71067811865476%) rotate(
-45deg
); /* 4 */
transform-origin: bottom right; /* 4 */
}
This transform declaration is quite a mouthful. Let’s apply it left to right and see the result at every step to try and make sense of it. We start with the ribbon absolutely positioned in the top right corner of the container.

1. translateY(-100%) translates the ribbon on its Y axis by its height.
2. rotate(90deg) rotates the ribbon 90 degrees clockwise to inverse its axis.
3. translateX(70.71067811865476%) translates the ribbon vertically (axes have been swapped) by cos(45) (while remembering that math functions expect radians, not degrees).
4. rotate(-45deg) rotates the ribbon 45 degrees counter-clockwise to orient it correctly.

The corners of the rotated ribbon stick out of the container, but overflow: hidden on the container is enough to clip these corners.

That’s it! ✨ What is nice with this solution is that tweaking the horizontal padding or the text content will automatically preserve the ribbon in the corner as expected. No need to change anything!
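As an aside, the precise-looking 70.71067811865476% value is not arbitrary: it can be computed once and for all with a couple of lines of JavaScript, applying the formula from the code comment above.

```javascript
// Where does the magic 70.71067811865476% come from? It is cos(45°)
// expressed as a percentage. JavaScript's Math.cos expects radians,
// hence the degrees-to-radians conversion.
const degreesToRadians = degrees => (degrees * Math.PI) / 180
const ribbonOffset = Math.cos(degreesToRadians(45)) * 100

console.log(ribbonOffset.toFixed(2) + '%') // prints "70.71%"
```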
Feel free to play with the interactive demo on CodePen.
As mentioned before, we fully support the absence of JavaScript (thanks to a carefully crafted server-side rendering solution). This brings an interesting challenge: how to work with Apollo GraphQL when JavaScript is not available? That is what we’ll cover in this article.
To work with Apollo in React, we use react-apollo. This driver provides useQuery and useMutation hooks which are very handy to communicate with our Apollo server through React components. A simple example might look like this:
import { useMutation } from 'react-apollo'
import gql from 'graphql-tag'

const MUTATION = gql`mutation removeEntry ($id: ID!) { removeEntry(id: $id) }`

const RemoveEntryButton = props => {
  const [removeEntry] = useMutation(MUTATION)
const handleClick = () => removeEntry({ variables: { id: props.id } })
return (
<button type='button' onClick={handleClick}>
Remove entry
</button>
)
}
When interacting with the button, the mutation is sent to the Express server by firing an AJAX request (that’s what useMutation does) with everything necessary for apollo-server-express to handle it.
The problem is that when JavaScript is disabled or unavailable, the button ends up doing nothing. We could remove the button, but that means the feature altogether doesn’t work without JavaScript. No good!
Before the web became a wasteland of abandoned JavaScript frameworks, forms were all the hype to perform actions on web pages. So if we want to provide our features when JavaScript is not available, we need to render forms, fields, inputs and buttons. Our server needs to accept and treat these requests, then redirect back to the correct URL.
Originally, we used to duplicate our GraphQL logic into individual REST endpoints. So if we had a removeEntry mutation, we used to have a /remove-entry Express route just for the no-JavaScript case. Needless to say, that was not a scalable solution.
Instead, my amazing colleague Mike Smart came up with an original solution: communicating with the GraphQL endpoint through HTML forms. We could keep things the way they are when JavaScript is enabled, and properly submit the form itself when JavaScript is not available. On the server, if it looks like it was coming from a form, we manually handle our request with Apollo.
Here is what our new MutationForm component looks like (with code comments for explanation):
import { useMutation } from '@apollo/react-hooks'
import gql from 'graphql-tag'
import serialize from 'form-serialize'
const MutationForm = props => {
const [mutate] = useMutation(gql(props.mutation))
const formRef = React.useRef()
const handleSubmit = event => {
// When submitting the form with JavaScript enabled, prevent the
// default behaviour to avoid a page refresh.
event.preventDefault()
// Call the mutation with the serialised form for variables, then
// redirect to the correct path accordingly.
mutate({ variables: serialize(formRef.current, { hash: true }) })
.then(() => window.history.pushState(null, null, props.successPath))
.catch(() => window.history.pushState(null, null, props.failurePath))
}
// Render a <form> with a ref to be able to serialise it, and a
// few hidden fields to hold the mutation and the redirect paths.
return (
<form action='/graphql' method='POST' ref={formRef} onSubmit={handleSubmit}>
<input type='hidden' name='__mutation' value={props.mutation} />
<input type='hidden' name='__successPath' value={props.successPath} />
<input type='hidden' name='__failurePath' value={props.failurePath} />
{
// Mutation-specific fields, as well as the submit <button>
// are up to the component to render.
props.children
}
</form>
)
}
Then we can rewrite our RemoveEntryButton as follows. Note how we now provide the id as a hidden input within our form, and how the button has type="submit".
const MUTATION = 'mutation removeEntry ($id: ID!) { removeEntry(id: $id) }'
const RemoveEntryButton = props => (
<MutationForm mutation={MUTATION} successPath='/' failurePath='/'>
<input type='hidden' name='id' value={props.id} />
<button type='submit'>Remove entry</button>
</MutationForm>
)
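Without JavaScript, submitting this form performs a regular urlencoded POST to /graphql. Once parsed on the server, the request body would look roughly like this (the id value is hypothetical, and note that form values always arrive as strings):

```javascript
// Roughly what `request.body` contains after the urlencoded form
// submission is parsed, given the RemoveEntryButton example above.
const requestBody = {
  __mutation: 'mutation removeEntry ($id: ID!) { removeEntry(id: $id) }',
  __successPath: '/',
  __failurePath: '/',
  id: '42', // hypothetical entry id, serialised as a string
}

// The server-side middleware can then split the reserved fields from
// the mutation variables with a destructuring assignment.
const { __mutation, __successPath, __failurePath, ...variables } = requestBody
console.log(variables) // prints { id: '42' }
```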
A typical integration between Apollo and Express might look like this:
const express = require('express')
const bodyParser = require('body-parser')
const { ApolloServer, makeExecutableSchema } = require('apollo-server-express')
const { typeDefs, resolvers } = require('./schema')
const app = express()
const schema = makeExecutableSchema({ typeDefs, resolvers })
const server = new ApolloServer({ schema, uploads: false })
app.use(bodyParser.urlencoded({ extended: true }))
server.applyMiddleware({ app })
app.listen(8081, () => console.log(`🚀 Server ready at ${server.graphqlPath}`))
What we are going to need is a custom GraphQL middleware (handleNoJavaScriptGraphQL). We are going to insert it before setting up ApolloServer, so that if our middleware doesn’t need to do anything (when the request comes from useMutation with JavaScript), it can forward the request to ApolloServer:
app.use(bodyParser.urlencoded({ extended: true }))
app.post('/graphql', bodyParser.json(), handleNoJavaScriptGraphQL(schema))
server.applyMiddleware({ app })
Our middleware should do a few things. First, it should detect whether the request comes from a client-side request, or the form submission (basically whether or not JavaScript was available).
If the request was performed with JavaScript, there is nothing more to do. ApolloServer will treat the request as always.

If the request comes from the form submission, it needs to call Apollo directly (with the undocumented but stable and exported runHttpQuery function), passing it all the necessary information to perform the mutation. Then, depending on the result of the mutation, it will redirect to the success URL or to the failure one.
const { runHttpQuery } = require('apollo-server-core')
const handleNoJavaScriptGraphQL = schema => (request, response, next) => {
const {
__mutation: query,
__successPath: successPath,
__failurePath: failurePath,
...variables
} = request.body
// Pick the `MutationForm`’s hidden fields from the request body. If
// they happen to be absent, return early and call `next`, as this
// means the request was performed with JavaScript, and this
// middleware has no purpose.
if (!query || !successPath || !failurePath) {
return next()
}
// Pass the schema, the mutation and the variables to Apollo manually
// to execute the mutation.
return runHttpQuery([request, response], {
method: request.method,
options: { schema },
query: { query, variables },
})
.then(({ graphqlResponse }) => {
const { data } = JSON.parse(graphqlResponse)
const operationName = Object.keys(data)[0]
const url = !data[operationName] ? failurePath : successPath
// CAUTION: be sure to sanitise that URL to make sure
// it doesn’t redirect to a malicious website.
return response.redirect(url)
})
.catch(error => response.redirect(failurePath))
}
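The CAUTION comment above deserves emphasis. A hypothetical sanitiser (not part of the article’s code) could, for instance, accept only same-origin absolute paths and fall back to the home page otherwise:

```javascript
// Hypothetical sketch: only allow local, absolute paths as redirect
// targets. Anything else (full URLs, protocol-relative '//' URLs)
// falls back to '/'.
const sanitizeRedirectPath = path =>
  typeof path === 'string' && path.startsWith('/') && !path.startsWith('//')
    ? path
    : '/'

console.log(sanitizeRedirectPath('/settings')) // '/settings'
console.log(sanitizeRedirectPath('https://evil.example.com')) // '/'
console.log(sanitizeRedirectPath('//evil.example.com')) // '/'
```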
That’s it! We managed to issue and handle a mutation with Apollo without having JavaScript available in the browser. All we did was submit all the necessary information for Apollo in an HTML form, and process it ourselves on the server.
It took us a bit of head-scratching to come up with a way to send potential errors back to the page. Originally, we prototyped passing them as part of the URL when redirecting back to the failure path. This was not ideal for several reasons, privacy and security being the most important ones.
We ended up serialising (and encrypting in our case, but this is not a required step) the outcome of the mutation and storing it in a cookie. Then, after we redirect back to the failure path, we read that cookie on the server, and pass it in a React context, then delete the cookie. From there, the React tree can read the errors from the React context and render them.
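As a rough illustration of that round-trip (names are hypothetical, and the encryption step mentioned above is omitted), the mutation outcome could be serialised into a cookie-safe string on the way out and parsed back on the next request:

```javascript
// Hypothetical sketch: serialise mutation errors into a cookie-safe
// base64 string, and decode them back when rendering the next page.
const encodeErrors = errors =>
  Buffer.from(JSON.stringify(errors)).toString('base64')

const decodeErrors = value =>
  JSON.parse(Buffer.from(value, 'base64').toString('utf8'))

const cookieValue = encodeErrors([{ message: 'Entry not found' }])
console.log(decodeErrors(cookieValue)[0].message) // prints "Entry not found"
```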
In this article, we cover only the very basics to make it possible to use Apollo without necessarily relying on client-side JavaScript. That being said, a lot can be done to go further that route. Here are a few suggestions.
⚙️ When client-side JavaScript is available and we do not go through a page render after a mutation, it might be handy to refetch some GraphQL queries. To do so, we can make the MutationForm accept an options prop that is passed to Apollo.
-mutate({ variables })
+mutate({ ...props.options, variables })
⏳ It is commonly advised to visually represent that an action is taking place through a loading state (when client-side JavaScript is present). We can modify our handleSubmit handler to save that state.
const [isLoading, setIsLoading] = React.useState(false)
const handleSubmit = event => {
event.preventDefault()
setIsLoading(true)
mutate({ variables: serialize(formRef.current, { hash: true }) })
.then(() => window.history.pushState(null, null, props.successPath))
.catch(() => window.history.pushState(null, null, props.failurePath))
.finally(() => setIsLoading(false))
}
We can then pass that state to the React children by expecting a function instead of a React tree.
props.children({ isLoading })
This lets us re-author our RemoveEntryButton as such:
<MutationForm>
{({ isLoading }) => (
<button type='submit' aria-disabled={isLoading}>
{isLoading && <Loader />}
{isLoading ? 'Removing entry…' : 'Remove entry'}
</button>
)}
</MutationForm>
This entire concept required some outside-the-box thinking, but it enabled us to keep offering a JavaScript-less experience in a scalable way. We get no-JS support basically out of the box by simply using our MutationForm component. Totally worth it. ✨
JavaScript is fickle. It can fail to load. It can be disabled. It can be blocked. It can fail to run. It probably is fine most of the time, but when it fails, everything tends to go bad. And having such a hard point of failure is not ideal.
In the last few years, we have seen more and more ways to build highly interactive web applications relying almost exclusively on JavaScript. To the point where we almost wonder whether we forgot where we come from. Not so long ago, there was a time when JavaScript was just sprinkled on top of web pages to have custom cursors and cool sound hover effects. But I digress.
The N26 web platform is built on React. One interesting thing about React as a library is that it can run seamlessly on the client as well as the server. In other words, generating HTML to send to the client is not only feasible, it’s also relatively easy.
So here is the gist: we render the React tree on the server as a string, and send it to the client. Once the browser is done downloading, parsing and executing the JavaScript bundles, the web page behaves as a single-page application: HTTP calls are performed with AJAX, links are simulated with the History API, and everything should work without having to refresh the browser at all.
Here is the thing though: we cannot expect the experience to be the same with and without JavaScript. That’s just not possible. JavaScript enables so many possibilities that the JavaScript-less experience will always feel worse in some ways.
Therefore it’s important not to try making the no-JS experience work like the full one. The interface has to be revisited. Some features might even have to be removed, or dramatically reduced in scope. That’s also okay. As long as the main features are there and things work nicely, it should be fine that the experience is not as polished.
This title is a bit of a misnomer, because we don’t really want to detect whether JavaScript is enabled. We want to detect that the JavaScript bundles have been successfully downloaded, parsed and executed.
Thankfully, React makes it trivial to detect all that: when a component has mounted, we can store on the state that it is ready, and from there we know that JavaScript is available since components don’t actually mount on the server.
const MyComponent = props => {
const [hasJavaScript, setHasJavaScript] = React.useState(false)
React.useEffect(() => setHasJavaScript(true), [])
return (
<>
{hasJavaScript ? (
<p>This will not render on the server, only on the client when JavaScript is finally available.</p>
) : (
<p>This will render on the server, and on the client until JavaScript is finally available.</p>
)}
</>
)
}
To avoid using a local state and a useEffect hook in every component that needs to know whether JavaScript is available, my amazing colleague Juliette Pretot suggested we do it at the top level, and then provide that information through the React context.
export const HasJavaScriptContext = React.createContext(false)
const App = props => {
const [hasJavaScript, setHasJavaScript] = React.useState(false)
React.useEffect(() => setHasJavaScript(true), [])
return (
<HasJavaScriptContext.Provider value={hasJavaScript}>
{props.children}
</HasJavaScriptContext.Provider>
)
}
Then components can read that value from the context:
const MyComponent = props => {
const hasJavaScript = React.useContext(HasJavaScriptContext)
return (
<>
{hasJavaScript ? (
<p>This will not render on the server, only on the client when JavaScript is finally available.</p>
) : (
<p>This will render on the server, and on the client until JavaScript is finally available.</p>
)}
</>
)
}
One slight inconvenience with the aforementioned solution is that the no-JavaScript version is visible while the JavaScript bundles get downloaded, parsed and executed. In a way, that’s the entire point: if they fail to be, the page remains usable. However, it’s sometimes a little awkward when the no-JavaScript and the JavaScript versions are visually quite different.
To try improving the user experience, my other amazing colleague Alina Dzhepparova started experimenting with an addition to our solution, still making no assumption about whether the user wants JavaScript, let alone whether they have a good enough network to download it.
When a user visits one of our web pages for the first time, and provided their browser is executing JavaScript properly, we set a flag in a cookie. During subsequent visits, we retrieve that cookie on the server and prefill the HasJavaScriptContext with the correct value. This way, we can render the JavaScript version right away, although it only becomes fully usable once bundles finally kick in.
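On the server, reading that flag could be as simple as parsing the raw Cookie header before rendering. This is a sketch: the expects_javascript cookie name matches the one cleared in the Express route shown further down, but the 'true' value is an assumption for illustration.

```javascript
// Sketch: determine the initial HasJavaScriptContext value from the
// raw Cookie header. The 'true' value is an assumption.
const hasJavaScriptCookie = cookieHeader =>
  (cookieHeader || '')
    .split(';')
    .map(pair => pair.trim())
    .includes('expects_javascript=true')

console.log(hasJavaScriptCookie('theme=dark; expects_javascript=true')) // true
console.log(hasJavaScriptCookie(undefined)) // false
```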
For users with JavaScript turned off but with the cookie flag somehow set (from a former visit), a <meta http-equiv='refresh' /> tag wrapped in a <noscript> tag gets added to the document <head>.
${props.hasJavascriptCookie
? `
<noscript>
<meta http-equiv='refresh' content='0; url=/js' />
</noscript>
`
: ''}
This meta tag redirects to an Express route (simplified below), where the cookie is deleted and the user is redirected back to the page they were on, thus causing the process to start again.
server.get('/js', (request, response) => {
  response.clearCookie('expects_javascript').redirect('back')
})
We track all JavaScript errors by sending some logs to our aggregator. Over the months, we realised we had an impressively high number of errors coming from Internet Explorer 11, despite using Polyfill.io to provide unsupported features.
While we do manage to recover from client-side JavaScript errors, we decided to route our Internet Explorer traffic to our no-JS version. On the server, we use ua-parser-js to (hopefully) detect the browser; if it is Internet Explorer, we no longer render JavaScript bundles, effectively simulating the no-JavaScript experience.
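The article relies on ua-parser-js for the detection; as a rough stand-in to illustrate the idea, a plain user-agent check for Internet Explorer could look like this (IE11 advertises 'Trident' in its user-agent string, older versions 'MSIE'):

```javascript
// Simplified stand-in for the ua-parser-js detection: IE11 exposes
// 'Trident/7.0' in its user agent, IE 10 and below expose 'MSIE'.
const isInternetExplorer = userAgent => /Trident|MSIE/.test(userAgent || '')

const ie11 =
  'Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko'
console.log(isInternetExplorer(ie11)) // true
console.log(isInternetExplorer('Mozilla/5.0 (X11; Linux) Firefox/115.0')) // false
```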
We realise it is an arbitrary and opinionated decision to make on behalf of the user, but we also believe a basic working experience is better than a fully broken one.
A link (<a>) leads to somewhere. A button (<button>) performs an action. It’s important to respect that convention.
Now, in single-page applications, things are a bit more blurry because we no longer follow links that cause a page to reload entirely. Links, while still changing the URL, tend to replace only the part of the page that changed. Sometimes, they might be replaced entirely by an inline action.
At N26, we have a pretty unique challenge: we support almost all of our features with and without JavaScript (thanks to server-side rendering). This implies that a lot of links should become buttons when JavaScript is enabled and running. To avoid authoring ternaries all over the place, we have a single component capable of rendering both links and buttons depending on the given props. We call it Action.
Our line of reasoning to determine what to render is as follows: if we have an href prop, we should render a link (an <a> element), otherwise we should render a button. It would look like this:
const Action = props => {
const Component = props.href ? 'a' : 'button'
return <Component {...props} />
}
If, like us, you use client-side routing such as react-router, you might also want to render its Link component for router links when the to prop is provided.
import { Link } from 'react-router-dom'
const Action = props => {
const Component = props.to ? Link : props.href ? 'a' : 'button'
return <Component {...props} />
}
Then, we can have a link changing into a <button> when JavaScript eventually kicks in:
const MyComponent = props => {
const [isMounted, setIsMounted] = React.useState(false)
React.useEffect(() => setIsMounted(true), [])
return (
<Action
href={isMounted ? undefined : '/about'}
onClick={isMounted ? props.displayAboutDialog : undefined}
>
Learn more about us
</Action>
)
}
The technique G201 of the WCAG asks that each link that opens in a new tab warns users in advance, both visually and via text available to assistive technologies.
To achieve that, we can render a small icon with an associated label stating “(opens in a new tab)”. The resulting markup would look like this:
<a href="/about" target="_blank" class="link">
Learn more about us
<svg aria-hidden="true" focusable="false" xmlns="https://www.w3.org/2000/svg" viewBox="0 0 32 32" ><path d="M22 11L10.5 22.5M10.44 11H22v11.56" fill="none"></path></svg>
<span class="sr-only">(opens in new tab)</span>
</a>
For the sake of simplicity, let’s assume we have an Icon component that renders an SVG, and a VisuallyHidden component that renders hidden accessible text.
const Action = props => {
const Component = props.to ? Link : props.href ? 'a' : 'button'
return (
<Component {...props}>
{props.children}
{props.target === '_blank' && (
<>
<Icon icon='new-tab' />
<VisuallyHidden>(opens in a new tab)</VisuallyHidden>
</>
)}
</Component>
)
}
We can also extract this logic into its own little component to make the JSX of our Action component a little easier to read:
const NewTabIcon = props => (
<>
<Icon icon='new-tab' />
<VisuallyHidden>(opens in a new tab)</VisuallyHidden>
</>
)
When following a link using target='_blank', the other page can access the window object of the original page through the window.opener property. This exposes an attack surface because the other page can potentially redirect to a malicious URL.

The solution to this problem has been around for pretty much ever: add rel='noopener' or rel='noreferrer' (or both) to the links opening in a new tab so the window.opener object is not accessible.
To make sure never to forget these attributes, we can bake this logic into our Action component.
const Action = props => {
const Component = props.to ? Link : props.href ? 'a' : 'button'
const rel = props.target === '_blank'
? 'noopener noreferrer'
: undefined
return (
<Component {...props} rel={rel}>
{props.children}
{props.target === '_blank' && <NewTabIcon />}
</Component>
)
}
If we want to be able to pass a custom rel attribute as well, we can extract this logic into a small function:
const getRel = props => {
if (props.target === '_blank') {
return (props.rel || '') + ' noopener noreferrer'
}
return props.rel
}
The default value for the type attribute on a <button> element is submit. This decision comes from a time when buttons were almost exclusively used in forms. And while this is no longer the case, the default value remains. Therefore, it is recommended to always specify a type on all <button> elements: submit if their purpose is to submit their parent form, button otherwise.
As this can be a little cumbersome, we can bake that logic in our component once again:
const Action = props => {
const Component = props.to ? Link : props.href ? 'a' : 'button'
const rel = getRel(props)
const type = Component === 'button' ? props.type || 'button' : undefined
return (
<Component {...props} rel={rel} type={type}>
{props.children}
{props.target === '_blank' && <NewTabIcon />}
</Component>
)
}
One of the reasons why people tend to use links when they should use a button, or buttons when they should use a link is because they think in terms of styles, rather than semantics.
If the design in place instructs to render a link to another page as a button, an uninformed (or sloppy) developer might decide to use a button, and then use some JavaScript magic voodoo to redirect to the new page.
By making our component themable, we can provide a styling API without injuring the underlying semantics. For our example, we’ll consider two HTML classes, button and link, styled like a button and like a link respectively.
const Action = props => {
const Component = props.to ? Link : props.href ? 'a' : 'button'
const rel = getRel(props)
const type = Component === 'button' ? props.type || 'button' : undefined
const className = [
props.className,
props.theme === 'LINK' ? 'link' : 'button'
]
.filter(Boolean)
.join(' ')
return (
<Component {...props} rel={rel} type={type} className={className}>
{props.children}
{props.target === '_blank' && <NewTabIcon />}
</Component>
)
}
Then we can render a button, styled as a link:
const MyComponent = props => (
<Action theme='LINK' type='button' onClick={toggle}>Toggle</Action>
)
Or a link, styled as a button:
const MyComponent = props => (
<Action theme='BUTTON' href='/about'>Learn more about us</Action>
)
Note how we preserve any provided className, so it becomes possible to give our component a class name on top of the one used by the component itself for styling.
const MyComponent = props => (
<Action theme='BUTTON' href='/about' className='about-link'>
Learn more about us
</Action>
)
Our Action
component holds even more logic (especially around webviews), but that is no longer relevant for our article. I guess the point is that anything that is important for accessibility or security reasons should be abstracted in a React component. This way, it no longer becomes the responsibility of the developer to remember it.
In this post, I want to show a teeny-tiny React component to make it more explicit and convenient to use the original utility class.
const VisuallyHidden = ({ as: Component, ...props }) => (
  <Component {...props} className="sr-only" />
)

VisuallyHidden.defaultProps = {
  as: 'span'
}
And here is how you would use it (taking the example from Accessible page title in a single-page React application).
const TitleAnnouncer = props => {
  const [title, setTitle] = React.useState('')
  // More React code…

  return <VisuallyHidden as='p' tabIndex={-1}>{title}</VisuallyHidden>
}
A few comments about the component:
- Depending on the way you author styles in your application, you could author the relevant styles differently (pure CSS, inline styles, CSS-in-JS…).
- The as prop is intended to provide a way to change the underlying DOM element that is rendered. We found that span is a good default in most cases, but you might want a p (like we do in our example), a div or something else.
- Finally, we spread the props so that it is possible to pass other DOM attributes to the underlying element (e.g. tabIndex). Note that we spread before the className prop so we don’t inadvertently override it.
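Since the hiding styles can be authored in different ways, here is a sketch of the classic visually-hidden ruleset expressed as a React inline-style object, as an alternative to a global .sr-only class (the object name is hypothetical):

```javascript
// The classic "visually hidden" pattern: the element stays in the
// accessibility tree but is not rendered visibly on screen.
const visuallyHiddenStyles = {
  border: 0,
  clip: 'rect(1px, 1px, 1px, 1px)',
  height: '1px',
  width: '1px',
  margin: '-1px',
  overflow: 'hidden',
  padding: 0,
  position: 'absolute',
  whiteSpace: 'nowrap',
}
```

It could then be passed as the style prop of the rendered component instead of applying the class name.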
Feel free to play with the code on CodeSandbox.
Traditionally, following a link causes the page to reload with the content of the new page. This makes it possible for screen-readers to pick up on the new page title and announce it.
With single-page applications using a JavaScript-powered routing system, only the content of the page tends to be reloaded in order to improve the perceived performance of the page.
In this article, I will share what I learnt from Temesis and how to make sure the title of your React SPAs is accessible to assistive technologies.
We will build a teeny-tiny React application with react-router and react-helmet. Our application will consist of a few page components, a navigation with a router, and a page title announcer.
The main idea is that every page will define its own title. The page title announcer listens for page changes, stores the page title and renders it in a visually hidden paragraph which gets focused. This enables screen-readers to announce the new page title.
You can already look at the code on CodeSandbox.
To begin with, let’s create our page components. Each page is simply a React component rendering a <h1> element, and a <title> element with react-helmet.
import React from 'react'
import { Helmet } from 'react-helmet'

const Home = () => (
  <>
    <h1>Home</h1>
    <Helmet>
      <title>Home</title>
    </Helmet>
  </>
)

const About = () => (
  <>
    <h1>About</h1>
    <Helmet>
      <title>About</title>
    </Helmet>
  </>
)

const Dashboard = () => (
  <>
    <h1>Dashboard</h1>
    <Helmet>
      <title>Dashboard</title>
    </Helmet>
  </>
)
Now, let’s create a top-level component which will handle the routing to these different pages. To keep it simple, let’s take it (almost) as is from the basic example of react-router. It renders our <TitleAnnouncer> component (described in the next section), a navigation, and a router.
const Root = () => (
  <Router>
    <>
      <TitleAnnouncer />

      <nav role='navigation'>
        <Link to='/'>Home</Link>
        <Link to='/about'>About</Link>
        <Link to='/dashboard'>Dashboard</Link>
      </nav>

      <hr />

      <Switch>
        <Route exact path='/'>
          <Home />
        </Route>
        <Route path='/about'>
          <About />
        </Route>
        <Route path='/dashboard'>
          <Dashboard />
        </Route>
      </Switch>
    </>
  </Router>
)
The last missing piece of the puzzle is the actual title announcer. It does a few things:
- It stores the page title whenever it changes (through react-helmet).
- It renders that title in a visually hidden paragraph (with the .sr-only class).
- It makes that paragraph focusable (with tabIndex={-1}) so it can receive focus on page change.
import React from 'react'
import { useLocation } from 'react-router-dom'
import { Helmet } from 'react-helmet'
const TitleAnnouncer = props => {
  const [title, setTitle] = React.useState('')
  const titleRef = React.useRef(null)
  const { pathname } = useLocation()
  const onHelmetChange = ({ title }) => setTitle(title)

  React.useEffect(() => {
    if (titleRef.current) titleRef.current.focus()
  }, [pathname])

  return (
    <>
      <p tabIndex={-1} ref={titleRef} className='sr-only'>
        {title}
      </p>

      <Helmet onChangeClientState={onHelmetChange} />
    </>
  )
}
That is all that is needed to handle page titles in an accessible way in a single-page React application. The react-router and react-helmet libraries are not strictly necessary, and the same pattern should be applicable regardless of the library (or lack thereof) in use.
Note that if you have a simple application and can guarantee there is always a relevant <h1>
element (independently of loading states, query errors and such), another, possibly simpler solution arises. It should be possible to skip that hidden element altogether, and focus the <h1>
element instead (still with tabIndex={-1}
). This solution could not scale for us as we have hundreds of sometimes complex and dynamic pages, some with a visible <h1>
element, some with a hidden one, and so on.
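As a sketch of that simpler alternative (assuming a browser environment, and a hypothetical helper name), focusing the heading on route change could look like this:

```javascript
// Hypothetical sketch: focus the page's <h1> on route change instead of
// a visually hidden paragraph.
const focusHeading = doc => {
  const heading = doc.querySelector('h1')

  if (heading) {
    // `tabindex="-1"` makes the heading programmatically focusable
    // without adding it to the keyboard tab order.
    heading.setAttribute('tabindex', '-1')
    heading.focus()
  }

  return heading
}
```

It could be called from the same useEffect hook that reacts to pathname changes.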
Feel free to play with the code on CodeSandbox.
In this article, I would like to share everything I have learnt about online applications for software development jobs. Please, keep in mind that my point of view is that of someone who is hiring for a diverse team in a fast-growing startup. I’ve never worked in Human Resources, and I have so many privileges that I haven’t had to make a resume in the last decade so, as always, your mileage may vary.
This is going to be a long one, so I broke it down into pieces so you can jump to a specific section more conveniently. The next section is a big summary of the article, and every section also contains its own little TL;DR at the end.
✨ First impression is everything. Make sure your resume stands out nicely. Work on the presentation and the appearance. Beware of typos, inconsistencies and design flaws. Keep it within a page or two.
👋 Introduce yourself honestly, and provide only the relevant details about yourself. Remove superfluous information that risk cluttering your resume.
🔗 Make sure any link you provide works and helps the reviewer understand your profile. Skip anything private or irrelevant.
👩💻 Focus on your core skills and the ones you want your job to be about. Do not emphasise skills you are not interested in, to make sure it is clear what you are all about.
👩🎓 Do not overdo the section on your education. Mention your level of education, and any diploma or certifications you have, but feel free to skip superfluous information.
🏃♀️ Be straightforward with your professional experience. Mention the main few things you accomplished, and skip anything barely relevant or anecdotal. The more focused, the better.
🌱 Give something more to your tech profile. You are not your tools, and it is important to show that there is something about you outside of writing code.
📝 If you want to attach a cover letter, make sure it is tailored to the company you apply for, and is a helpful complement to your resume.
Many companies, especially tech startups, are constantly hiring. The job ads are basically always up, and the pipeline gets screened on a weekly or even daily basis. That means there are dozens, if not hundreds of applications for some tech jobs. At the peak of hiring for our web developer position, I used to screen about 50 profiles a week. That is a lot.
That means the way your resume looks is absolutely critical. Now, I understand that not everyone has the time, the energy or the will to design their own resume. Fortunately, that is not necessary, because the internet is full of free or cheap CV templates that are ready to be filled.
What is important is that it looks clean, tidy and properly structured. The hierarchy of information needs to stand out (often helped by breathing space), and it should be inviting—which means it shouldn’t be 10 pages long; one or two pages at most is good. When I open the resume for the first time, I should say “Ah!” not “Oooh…”.
Be careful with typos. Similarly, be mindful of consistency. If you are going to write “front-end” with a hyphen, or “React.js” (although this is no longer correct), do so all the way through. Pay attention to capitalisation: it’s “JavaScript”, not “javascript”; “Sass”, not “SASS”; “git”, not “GIT”, and so on. Along the same line, pay attention to punctuation: if you end a sentence with a full stop, do the same for all the other sentences.
All of this may sound silly, although what you want to avoid is to have these things distracting from the actual content. Every typo, every inconsistency, every minor mistake, every design glitch is something that catches the eye when it should not. And for a reviewer as nitpicky as I am, that’s not ideal.
TL;DR
A resume is supposed to introduce you, so it obviously needs the very basic information about you. Something interesting is that the information that is relevant to the Human Resources department is unlikely to be the same as for me, a technical reviewer.
Things I, a tech reviewer, care about: the name you use, the pronouns you use, the country you currently live in. Things I don’t care about: your official legal name, the sex you were assigned at birth, your fully qualified current address, your age, your marital status and whether or not you own a driving license.
I think opinions vary when it comes to whether or not a resume should include a picture. My personal opinion is that it probably should not. At best, it brings nothing; at worst, it risks interfering with the content of the resume.
I have seen candidates include a few short sentences about themselves and what they are looking for. I believe this is a nice touch, as this gives immediate information in a digestible way.
TL;DR:
Let’s briefly talk about links to social networks:
TL;DR:
Your resume is not the package.json of your last project. It cannot be just a collection of languages, libraries and frameworks. You are more than your tools.
Focus on your top skills. And by top skills, I mean the ones you either excel in, or want your next job to be about. Knowing you took a Visual Basic course in college 7 years ago has little to no value for me. It certainly won’t have any for the company if you apply for building React applications or working on Rails systems.
Give it some deep thought: what is it you’re passionate about and/or are—comparatively to the rest of your skills—talented at? It should be a few skills tops, not dozens. Statistically speaking, one doesn’t get to be an expert in many topics, especially not in the course of just a few years.
Myself, I would emphasise on HTML, CSS, React, accessibility and documentation (not necessarily in that order). Sure, I have some knowledge in Jenkins, Bash and PHP but a) I don’t want my job to focus on these and b) these are not my strong suit.
Now I know charts, diagrams and gauges are trendy and colorful, but they really shouldn’t be used to represent your skills. Competency is not absolute. There is no such thing as “100% in CSS” or “5 stars in JavaScript” or “8/10 in React”. There rarely is a way to measure skills in a strict and deterministic manner. We happen to be talented in something comparatively to our other skills or to other people. Therefore, charts are not the best way to showcase this information.
I would recommend listing your core skills in the order you care about, and accompanying them with an explicit level or a full-on explanation. For instance, in my case:
- Accessibility (steady theoretical knowledge)
- Documentation (tech-writer and passionate)
- React (main JS framework for the last few years)
- CSS (authored a book about it)
- HTML (knowledgeable)
Reading this tells the reviewer: these constitute my areas of expertise, to the level described along with each, and I want this to be the focus of my job.
I would also warn against listing skills such as operating systems, tool suites like Office, and unrelated tools like Adobe InDesign or Gimp. When applying to international companies though, be sure to mention the languages you speak, and at which level.
TL;DR:
The tech industry is, comparatively to other industries, quite self-made. A lot of people taught themselves web development, sometimes alongside one (or more) job, to change career entirely. That is one of the beautiful things about our industry (if only we were not chasing our most vulnerable members away, but that’s for another article).
This also means the need for formal education is lesser than in some other industries or parts of our industry. Unless you are applying at big-name tech companies or you haven’t had the chance to gain much experience yet, whether you own a Computer Science degree will be of little importance. It is probably still worth mentioning your education level, but I would not overdo it with a comprehensive list of your entire curriculum, with locations and whatnot. In my experience this is not really needed in most cases.
TL;DR:
If our job is not about diplomas, it is about experience. That’s sometimes difficult for entry-level developers, who have to compensate with other things such as education and side-projects. But for the rest of us who have had at least some work experience, our resume should be mostly about this.
If you have been working for a long time and happen to have a lot of experience, stay focused. List the last few positions you held, do not go all the way back to that internship you did in college. Your engineering profile from 2010 is likely to be so different and outdated that it is not relevant to that position you are applying for.
For each position held, state the name of the company, the role you occupied and its time frame. I would advise against explaining what the company does, as this information should be easily available on the company’s website. I am also not a fan of mentioning the tech stack, unless very relevant. I would expect a frontend developer to master the main languages of the web; the details of the tech stack itself are not very relevant in my opinion.
Then, list a few of your responsibilities and accomplishments. There again, stay focused. Do not list that one time you had to write an Excel formula; it doesn’t matter. Write about what was the core of your job, and things you are proud of. They are the things you will bring with you to your next job, and they are the ones that matter.
For my experience at N26, it might have been:
N26 — Web Tech Lead (September 2016 – present)
- Created and led development on what is now the web platform for 3M users
- Initiated engineering practices around accessibility and its testing
- Authored large parts of the (mostly tech) documentation for the web platform
- Designed the testing setup and deployment pipeline for the web platform
- Contributed to the hiring process and conducted interviews for all web engineers
- Performed onboarding, code reviews and pair-programming sessions with web engineers
An interesting point to make is that the things I mention about my job at N26 somehow match the core skills I listed above. There again, it gives the reviewer a sense of what kind of developer I am, and what I am looking for.
TL;DR:
Now, this is the tricky part. You might have done everything else by the book (if there is even such a thing), but that might not have been quite enough… Maybe, it’s because it lacks a little something that makes your profile stand out.
We are more than the sum of our skills. Coding is the easy part. We are human beings, and most of our job is about working with other human beings. It’s important to show that we are able to work in a team, to care for one another, to display empathy.
There are a few things I consider positive signs: any work dedicated to make a team better (as an acting facilitator for instance), conducting workshops, onboarding engineers, writing substantial documentation, doing humanitarian work, volunteering, teaching classes, organising events… The list goes on. And while it’s not a perfect indicator, it usually says something important: there is more to your profile than a list of technologies and a couple of roles in tech companies.
TL;DR:
There seem to be mixed opinions about whether or not to attach a cover letter to a resume. In my opinion, a cover letter is only helpful when done properly and personalised; otherwise it just looks sloppy and uninspired. So my recommendation would be to only attach a letter if you can afford the time to make it relevant and helpful for your application. Otherwise, feel free to skip it.
Things a cover letter is not: snippets of your resume, a link to your website, a checklist of your skills. Things a cover letter should do: complement your resume, give insights on who you are, tell why you like the company and think you would fit.
Interesting letters I have read started by talking a bit about who the candidate is, then explained how they learnt about the company/position, and finally mentioned why they would like to join (new challenge, working on the product, learning new tech, etc.). Kindness and humour are also nice of course.
Also, a final pet-peeve of mine: please don’t start your letter with “Dear Sir or Madam”, let alone “Dear Sir”. I’m the non-binary person that will review your application. You might not want to misgender me on the second word of your letter. Prefer neutral forms such as “To whom it may concern”, “Dear <company>”, or something more casual (depending on the company you apply for).
TL;DR:
That is pretty much all I have to share on this topic. I hope this helps you land your next job. And if you have any questions or comments, as always, feel free to get in touch on Twitter. ✨
Seeing all these decade-in-review articles and Twitter threads made me want to reflect on those last 10 years and see what I’ve accomplished (and failed to accomplish).
Ten years ago, I had just turned 18. I was living with my parents in Grenoble (France), and was in my first year after high-school. I had no idea what I wanted to do. At that time, playing video games was basically all I was interested in. Your typical boring shy nerd.
Obviously, a lot has happened over the last 10 years for me. People say one’s twenties are the most active years of one’s life. Yet, if you had told me 10 years ago where I’d be today, I would not have believed it.
I guess that’s a short and sweet recap’ of the last 10 years, focusing on the positive. Now onto 10 more!
✨ February 10th. Long overdue, I took some time to redesign this website, with a softer palette and design. I went with large typography, white and pastel rose and a rather minimalistic approach. 10 months in, I still like it very much so I guess it was a success.
🔏 February 18th. After 1.5 years of working relentlessly with the security engineers at N26, I got to publish one of the company’s most comprehensive technical articles about web security at N26. A lot of work went into it, and I’m still very proud of this write-up.
🇳🇱 March 21st. In an attempt at understanding my partner’s culture and family better, I started learning Dutch on Duolingo on a daily basis. I can now read menus, signs and simple texts, and get to ask small things without struggling too much. Learning a new language is difficult, but I’m slowly getting there.
🏃♀️ August. I tried my hand (or rather, foot?) at running. I managed to run a few kilometers every weekend for a couple of months, until I stopped late October. I kind of liked it and might get back to it next year.
🇦🇹 August 9th. I have been to Vienna for the very first time for a get-away weekend. I got the great opportunity to attend a live and intimate piano performance, making for a lovely memory. All in all, I haven’t been disappointed and am looking forward to going back.
👩💻 September 1st. This day marked 3 years of me working for N26, the longest I have been in a company so far. I’ve achieved a lot, and am looking forward to keep making digital banking good and accessible to y’all.
🐟 November 15th. After a year to the day without eating meat, I stopped eating fish as well. That’s it, I am now vegetarian!
📦 December 11th. I moved into a bigger and nicer flat with my partner and our 3 cats and although it has been a stressful few weeks, it’s very nice to get to start fresh somewhere new.
🎄 December 25th. I was in Delft (Netherlands) with my partner to celebrate their birthday and Christmas alongside their family—now marking my 3rd visit to this city, which I like very much for all the food and drinks I get to have there.
We expanded our team from 9 to 25 web engineers, now in 4 different countries, still all working on a unique code base. We have yet to solve all communication challenges that arise with a cross-office working environment, but I am confident we are on a good track for that.
I kept pushing for more accessibility and diversity in the workplace, organising the Accessibility Championship Program with my incredible coworker Mariia Punda. We are raising awareness about accessibility issues across teams in order to hopefully make N26 a front-runner for accessible banking.
I also invested quite some time into our deployment pipeline, namely Jenkins, in order to improve our efficiency and reliability. It’s an area I have been notoriously bad at, yet it was very interesting and enjoyable having to navigate the complexity of the matter.
Another ongoing personal achievement of my work at N26 is the documentation we have for our web platform—about which I’ve tweeted a few times. It is about 60,000 words (or the equivalent of a ~200 pages book) divided in 50+ chapters, published with Gitbook on internal servers for the entire company to have access to it.
Following 2018, I spent 2019 feeling less and less guilty about not hustling all the time. I spent less time writing, less time coding, less time on side projects, less time on open-source software. It took me years to get to a point where I feel like time spent doing something else than banking on my career is time well spent as well. I have the feeling 2020 is going to be similar.
🌱 Over the last two years, I decided to try being more healthy in general, especially when it comes to food. I stopped eating meat and fish (although reasons are mostly environmental), as well as candy and soda, and reduced my consumption of refined sugars, caffeine and, to a lesser extent, alcohol. As a result, I have been feeling more healthy and less self-conscious.
🤯 Early in the year, it occurred to me that the less comfortable I am with a technical topic, the more I tend to use overly complicated words when explaining it in order to convince myself I’m familiar with it. It took some conscious effort, but I feel like I got way better at admitting I don’t know things, or that I don’t know enough to explain them.
I know it’s a bit out of scope for a yearly retrospective, but I think it’s worth mentioning what I would like to focus on for next year. If only for me.
👩🍳 In 2020, I would like to brew my own beer and make my own cheese, at least once, to know how it works and whether I like it or not. Along the same line, I hope I will keep cooking more and more as I have been doing this year.
💧 I need to drink more water. I’m not drinking enough water, and I still haven’t found a reminder that works for me. I tried a few apps, but they all got in the way somehow.
I think that’s it for this year, folks. I hope you had it good yourself, and are ready for the new year!
I would be curious to learn how you approach releasing software at @n26. To be more precise, what has to happen to go from finishing something on your machine to releasing it to all customers. I would assume you have a pretty sophisticated test setup by now.
— Florian Nagel, Twitter
Note that I will be talking exclusively about the web services’ process. Backend microservices, native applications and other pieces of the N26 software architecture might have a different system in place due to their specific constraints and requirements.
When committing code on our mono-repository, code changes go through linting with ESLint and formatting with Prettier automatically via a Git commit hook.
Our linting setup is quite thorough, and includes things like static evaluation of the JSX to spot accessibility mistakes with eslint-plugin-jsx-a11y and auditing the code for security with eslint-plugin-security. Linting prevents us from a lot of silly mistakes which can happen when writing code. That’s the first line of defense.
Once the code has been committed and pushed to the remote repository, it has to go through a pull-request in order to be merged. No one has write-access on the master
and develop
branches, so no one can force push code to production.
We give a lot of importance to code reviews in our team: everybody is welcome to review code, suggest improvements, make comments and ask questions. Only once at least one other person has approved the code can the pull-request be merged.
While other developers review the code changes, unit tests run against the branch on Jenkins in order to make sure the code doesn’t break anything.
When the pull-request has been merged, we initiate a testing build. This mimics a production environment (with dead code elimination, minification, production dependencies…), and runs an extensive test suite:
We make sure dependencies are free of vulnerabilities by auditing them with npm audit
. If there are any vulnerabilities, the build is immediately aborted to ensure we don’t allow our dependencies to offer attack vectors into our client-side applications.
We test all our security features such as Cross-Site Request Forgery protection, client-side encryption, brute-force protection, and so on. This is to ensure the building blocks of our web security are working as expected and never fail.
We test that all our routes return what we expect (200, 301, 302, 404…). This works by building a “routes manifest” by merging static routes and dynamic ones coming from our Contentful CMS. All web services combined, we test about 3500 routes to make sure there is no rendering error and they actually work.
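As a rough illustration (all names here are hypothetical, not our actual code), building such a manifest could boil down to merging the static route list with slugs coming from the CMS and de-duplicating the result:

```javascript
// Hypothetical sketch: build a flat "routes manifest" by merging
// statically declared routes with dynamic ones coming from a CMS.
const buildRoutesManifest = (staticRoutes, cmsPages) => {
  // CMS entries are assumed to expose a `slug` field.
  const dynamicRoutes = cmsPages.map(page => '/' + page.slug)

  // De-duplicate while preserving order.
  return [...new Set([...staticRoutes, ...dynamicRoutes])]
}
```

The test runner would then iterate over the manifest, request every route, and assert on the response status.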
We then run pa11y on all these routes which return markup (which is most of them) to test for basic accessibility requirements (mostly correct DOM structure). This ensures we don’t break accessibility basics without realising it.
Then, we run an extensive suite of end-to-end tests powered by Cypress to test the main scenarios of our web platform. This mimics proper user interaction, and most of them actually hit a testing database, therefore also covering the communication between the frontend and the backend API.
Finally, we run some performance auditing with Lighthouse to ensure our main landing pages (e.g. the website’s home page, the login page, and so on) are fast and respond quickly.
Once all the tests have passed, the code is deployed on staging servers only available internally, on which we do some manual smoke testing to make sure things are working well.
When we are ready to go live, we do a production build that goes through a similar testing flow, although without even remotely touching the production databases.
Due–among other things–to our banking requirements, we have to be very thorough with documentation. Every single pull-request we merge goes into release notes we keep on GitHub and is linked to a product requirement on Jira. When releasing code live, we publish the release following semver conventions.
Being that verbose with contribution history makes it easier for us, but also for teams relying on our work, to know what goes into each release, and when specific code changes were shipped live.
I hope this inspires you to make your deployment process and pipeline fast and efficient as well! Feel free to share your thoughts with me on Twitter. Oh, and don’t forget that we are currently hiring in Berlin and Barcelona!
At N26, we recently discovered a nifty little bug which likely had been around for a few days unnoticed: entering an initial white space in the IBAN field when performing a transfer would cause a JavaScript error. Not ideal I hear you say, and you’re right. In a typical client-side application, this would cause the entire page to fail Because JavaScript™.
What happens in our case is that we immediately reload the page without loading any JavaScript. At this stage, the user is informed they have been redirected to the “basic version”, and are free to continue using it or to go back to the interactive version.
So how does this thing work under the hood? Let’s start with the obvious: the app needs to run seamlessly without JavaScript. That’s one thing for sure.
Then, we need a way to catch runtime errors and trigger the reload. We use componentDidCatch from React, but it could also be done with window.onerror or something similar.
A very simple implementation with React might look like this:
class Root extends React.Component {
  componentDidCatch(error) {
    const { pathname, search, hash } = window.location
    const query = search ? search + '&noscript=1' : '?noscript=1'

    // Reload the current page with a `noscript` query parameter so the
    // server knows not to render the JavaScript bundles.
    window.location.href = pathname + query + hash
    // Feel free to log `error` in your error tracker as well.
  }

  render() {
    return this.props.children
  }
}
Server-side setup is very project-specific and tends to be quite complex, so it is difficult to provide an adequate code example. Basically, your implementation needs to check the query to figure out whether bundles should be rendered/loaded or omitted.
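For instance (a hedged sketch, not our actual server code), the server-side check could be as simple as reading a noscript query parameter from the requested URL:

```javascript
// Hypothetical sketch: decide server-side whether to omit script tags
// based on the `noscript` query parameter set by the error handler.
const shouldOmitScripts = url => {
  // The base is only needed to parse relative URLs with the URL API.
  const query = new URL(url, 'http://localhost').searchParams
  return query.get('noscript') === '1'
}
```

The rendering layer would then skip emitting the bundle script tags whenever this function returns true.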
In our case, it looks a little bit like this:
Object.keys(webpackBundles).map(bundleName => (
  <script src={webpackBundles[bundleName].js} key={bundleName} defer />
))
While the user technically doesn’t have to know they have been redirected to a lite version, it might be more transparent and less confusing to tell them. In our case, we render a fixed message at the bottom of the screen with a link to reload the page with JavaScript enabled.
There has been an interesting discussion on Twitter around the wording. Something along these lines should work:
Something went wrong and we switched you to the basic version. You can continue browsing or switch back to the full version.
… with a link on the last part of this sentence linking to the same page but without the query parameter.
One would call that “progressive enhancement” but I’d rather talk about “graceful degradation” here, because this is more of a safe check than anything else.
In all honesty, we don’t want to encourage people to use our lite version. It’s there for recovery reasons:
That’s pretty much it. I hope you like this idea and you’ll consider making your apps working without JavaScript!
In this article, we’ll see how we built that discreet mode and why it is worth considering adding one to your application as well.
Our goal at N26 is to make users comfortable dealing with their money. We go to great lengths to enable people to perform boring banking actions in a simple and efficient way, be it on mobile or desktop.
This certainly comes with its own challenges. Money being the sensitive topic it is, many people feel uncomfortable talking about it, especially their own. This is how the idea of having a discreet mode came up.
A simple toggle to mask all sensitive user information, mainly amounts and account balance. Checking your account on public transport or in a shared open space? We got you covered.
An interesting side effect we didn’t anticipate from this feature is that some people use this mode not to make sure that no one sees their information, but rather to avoid being confronted with their own account balance every time they use the app.
Under the hood, this is not dramatically complex: there is a control which dispatches a Redux action to define whether the discreet mode is enabled or not. This setting is saved in a cookie, but eventually will make its way to our database once/if the native N26 apps get to implement the discreet mode.
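As a sketch of what that looks like (action type and cookie name are assumptions, not our actual implementation), the preference can be modelled as a tiny Redux-style reducer plus a cookie serialiser:

```javascript
// Hypothetical sketch of the discreet-mode preference: a reducer for the
// Redux store and a helper to persist the setting in a cookie.
const discreetMode = (state = false, action) => {
  switch (action.type) {
    case 'DISCREET_MODE_SET':
      return action.payload
    default:
      return state
  }
}

// Serialise the preference into a cookie string the server can read back.
const serialiseDiscreetCookie = enabled =>
  `discreet=${enabled ? '1' : '0'}; Path=/; SameSite=Lax`
```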
Between the account balance and each transaction’s total, we display a lot of amounts. This means we need a centralised way to display or mask an amount based on the state of that preference. Our Amount
React component connects to the Redux store and applies different styles based on the preference status.
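In plain JavaScript terms, the decision boils down to something like this (a sketch with an illustrative helper name; our actual Amount component is a React component reading the flag from the Redux store):

```javascript
// Hypothetical helper: compute the CSS filter for an amount based on the
// discreet-mode flag. The real component applies this through its styles.
function amountFilter(isDiscreet) {
  return isDiscreet ? 'blur(10px)' : 'none'
}
```

When the preference flips, only this computed style changes; the amount itself never leaves the document.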
We want to mask the amount on the screen while making it obvious that some content is being masked. We don’t want to remove the content from the document entirely. To do so, we decided to use the blur CSS filter.
.amount {
filter: blur(10px);
}
We have to consider the case where CSS filters are not supported (such as on Internet Explorer). There are a few ways to work around this; we fell back on opacity, which is not ideal because it becomes unclear that some content has been masked, but at least nothing sensitive shows:
.amount {
opacity: 0;
}
@supports (filter: blur(10px)) {
.amount {
opacity: 1;
filter: blur(10px);
}
}
On top of that, we add some CSS transitions so the switch between the two modes is smooth.
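For instance, something along these lines (a sketch; the exact durations are illustrative):

```css
.amount {
  transition: filter 200ms ease, opacity 200ms ease;
}
```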
For now, we only use the discreet mode to hide amounts. That being said, we are considering further applications of this setting, such as hiding identifying information, the displayed digits of credit card numbers, card tokens and more.
We are also investigating faster ways to toggle this mode: either by long-tapping/double-clicking a discreet piece of information, or with a custom keyboard shortcut.
If you have ideas on how to improve this feature, be sure to get in touch with me on Twitter!
]]>It turns out implementing an option for users to disable animations across the board is surprisingly easy. This is what we’ve done at N26 as part of the rewrite of the web application. Here’s how we did it, and how you could too.
The core concept behind this technique is rather simple, only the implementation differs based on the tech stack.
Here is how it works: an option to toggle a flag exists somewhere in the user’s settings. Under the hood, this flag changes a CSS custom property to 0 (disabled) or 1 (enabled). All animation and transition durations and delays are multiplied by this flag using calc(..); when disabled, the operation will result in 0, effectively disabling the animation/transition.
Last but not least, we can read the system preference through the prefers-reduced-motion media query when supported to automatically turn off this flag. CSS-Tricks has a fantastic article about reduced motion, in case you haven’t read it yet.
The very first thing we need to do is to define the CSS custom property at the root of the document. We called it --duration, but feel free to pick another name.
:root {
--duration: 1;
}
For this technique to work, all animations and transitions need to be authored in a specific way. The trick is to multiply the desired value by the value of the flag through the calc(..) function.
For the sake of the argument, consider the following declaration:
.foobar {
transition: transform 250ms;
}
We need to rewrite it like this:
.foobar {
transition: transform calc(var(--duration) * 250ms);
}
When the --duration CSS custom property is set to 1, the duration gets resolved to 250ms, otherwise to 0ms. This works the same for animations.
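For an animation, the same multiplication applies (the animation name and duration here are illustrative):

```css
.foobar {
  animation: slide-in calc(var(--duration) * 300ms) ease-out both;
}
```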
Because this is a strictly visual concern, we don’t save this option in our database. We keep it as a cookie on the browser level. We could have used localStorage all the same, which is what the following examples do for the sake of simplicity. Our application is in React, but here is how it could be written in plain old JavaScript:
document
.querySelector('#reduced-motion')
.addEventListener('change', function (event) {
const reducedMotion = event.target.checked
saveReducedMotionOption(reducedMotion)
updateReducedMotionFlag(reducedMotion)
})
function saveReducedMotionOption(value) {
localStorage.setItem('reducedMotion', value)
}
function updateReducedMotionFlag(value) {
// `true` (reduced) should be `0`, `false` should be `1`.
const flag = Number(!value)
document.documentElement.style.setProperty('--duration', flag)
}
On page load, we need to check the stored value and update the --duration custom property accordingly.
document.addEventListener('DOMContentLoaded', function (event) {
// localStorage stores strings, so compare explicitly:
// Boolean('false') would incorrectly yield true.
const reducedMotion = localStorage.getItem('reducedMotion') === 'true'
updateReducedMotionFlag(reducedMotion)
updateReducedMotionCheckbox(reducedMotion)
})
function updateReducedMotionCheckbox(value) {
document.querySelector('#reduced-motion').checked = !!value
}
At this point, we should have a working reduced-motion mode that users can toggle at will. Now, an extra nice thing we can do is detect whether the user has already enabled reduced motion on their operating system. Not all OSes have such a setting. Here is how it looks on macOS for instance:
In theory, this hint is passed down to the browser (if supported) so it can be detected through a media query. Support for the prefers-reduced-motion media query is rather scarce so far, but it will get better eventually.
If we can detect the reduced mode, we can turn on the flag automatically and disable the checkbox. The first part can be done in CSS (or in JavaScript directly, up to you):
@media (prefers-reduced-motion: reduce) {
:root {
--duration: 0;
}
}
The second part will need a little bit of JavaScript.
document.addEventListener('DOMContentLoaded', function (event) {
const checkbox = document.querySelector('#reduced-motion')
const query = '(prefers-reduced-motion: reduce)'
const hasOSReducedMotion = window.matchMedia(query).matches
if (hasOSReducedMotion) {
checkbox.checked = true
checkbox.disabled = true
}
})
At N26, we even changed the copy to explain why the setting is checked but disabled:
From there, we can use this reduced motion mode for more than just disabling animations and transitions. For instance, we swap all looping GIFs with static images. We keep videos as long as they need user activation (which they should anyway).
One thing to be careful of however is not to remove important interactions such as hover / focus states. This “lite mode” is really about reduced motion on screen, but it doesn’t mean we abandon the concept of visual states.
I hope you liked this article. You can play with a small demo on CodePen.
]]>If you’re unaware of what Git is, I wrote “Git, the practical very basis” on my brother’s blog, where I explain the baby steps of version control. Check it out.
I quickly realised there is no way to be comfortable with command-line Git in the default OS terminal. On macOS, I recommend installing iTerm2 and pimping it to display the branch name as part of the prompt. Also, colors. I mean, look at that beauty:
The command I type the most has to be git status, and given how annoying that word can be to type, I have git s for short. The other thing that’s very important, especially when rebasing, is to be able to see what the history looks like.
There is git log, but that’s a very bland display of the past commits, not to mention unbearable to read. Because I like my Git logs to reflect what really happened, I have a git lg that’s short for git log --pretty=oneline --abbrev-commit --graph --decorate. I am not typing this by hand.
This creates a nice graph with the commit IDs, messages, branch names, etc. Like this:
To quickly jump between branches, I created a few aliases. At N26, the master branch is the protected release branch, and develop is the main one (also protected). Everything goes through pull requests against the main branch.
I aliased git checkout as git co and git branch as git br:
$ git br -D feature/my-old-feature
$ git co -b feature/my-new-feature
To make it easier to move to the master and develop branches (as it can happen quite a lot, especially for develop), I created the git com and git cod aliases respectively.
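For reference, these aliases could be declared in a .gitconfig like so (a sketch; my exact definitions live in my dotfiles and may differ slightly):

```
[alias]
  s = status
  lg = log --pretty=oneline --abbrev-commit --graph --decorate
  co = checkout
  br = branch
  com = checkout master
  cod = checkout develop
```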
The basics of Git are adding some files to the index, committing the index in history, then pushing the history diff to the remote. Also known as “add-commit-push”.
I didn’t alias the add command because it’s short enough that an alias is not necessarily going to bring me any value. I could alias it to git a, but at this stage it would be more annoying to deal with muscle memory than to type these two extra characters. I did alias git commit -m into git cm though.
$ git add .
$ git cm "Replace a regular expression with a split in the forwarder"
$ git push
When it comes to pushing, I like to avoid having to type the name of the remote (usually origin) and the name of the branch. Problem is, Git 1.* uses matching as the default configuration for the push command without arguments. This pushes all branches using the same name locally and on the upstream repository.
Because this is a terrible default value (which has been changed in Git 2.* for safety reasons), I updated the push configuration in my .gitconfig (and made all my coworkers do the same):
# See: https://git-scm.com/docs/git-config#git-config-pushdefault
[push]
default = current
Git’s interface for undoing things is unbearable in my opinion. git reset --soft HEAD^, what the hell is that? So let’s see how I make undoing/redoing things easy.
To undo a commit entirely, I created a git undo alias (short for git reset --soft HEAD^) which deletes the last commit from history but keeps the changes in the index in case I want to do something with them.
To move things out of the index (the opposite of git add), I have git wait (for git reset HEAD). And to throw away the remaining local changes entirely, I aliased git checkout . into git abort. I also had it under git nope for a while. Not sure why I ever changed it though, git nope is gold.
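In .gitconfig form, these three could look like this (a sketch based on the expansions above):

```
[alias]
  undo = reset --soft HEAD^
  wait = reset HEAD
  abort = checkout .
```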
So let’s say I realised my last commit was complete poppycock and I want to undo all of it and never speak of it ever again:
$ git undo # This undoes the last commit
$ git wait # This moves staged files out of the index
$ git abort # This discards the remaining local changes
Updating a branch with the main one is done through fetching and rebasing with the origin (or merging, but that’s not my thing). I didn’t alias git fetch, but I did create git rod for git rebase origin/develop, mostly because I never remember whether it should be a space or a slash.
$ git fetch && git rod # Boom, up to date
Updating develop with its remote counterpart is done through git pur (or git purr for when I feel particularly kitty), short for git pull --rebase. The --rebase flag unsurprisingly rebases the current branch on top of the upstream branch after fetching. This avoids a merge commit whenever I get up to date with the remote branch.
When working on a branch, I commit frequently and tend to rewrite my commits many times. The goal is that once the feature is done, the branch history should be clean, helpful and explicit. Someone could start reviewing my PR by checking the list of commits and have a pretty good idea of what’s happening before even looking at the code.
To achieve that, I rebase a lot. I know a lot of people don’t like rebasing, and that’s a shame. Rebase is an outstanding tool to make sure the history of the branch you work on is meaningful. I don’t want to open that can of worms, but if you’d like my take on rebasing vs merging: rebase feature branches until they are clean, merge them into main branches. Been running like this for years including on projects with multiple developers and it’s been great.
Anyway, the point is: I do a lot of interactive rebases. My usual workflow looks like this: do a bit of work, do a commit, realise I forgot something therefore update the history (no “fix” commit with me). Eventually push the history onto the remote.
If the commit I want to update is the very last one in history, that’s rather easy: there is git amend (short for git commit --amend --no-edit). This simply adds what’s in my index to the last commit, without even asking me if I want to change the message.
$ git add path/to/file/i/updated.js
$ git amend
If the commit is further away in the history of the branch though, I need something more powerful. Usually, I rebase n commits in the past. Git opens a Visual Studio Code tab/window (yep) to ask me what to do with all them commits. I edit and save this file to continue the rebase, until I’m done. Let’s unfold this.
The command to rebase n commits is git rebase -i HEAD~n but seriously, who has time for that? I created a git rb alias that accepts a number argument. Here it is:
rb = "!sh -c \"git rebase -i HEAD~$1\" -"
I can use it like this:
$ git rb 2
I’m not a fan of Vim, so I made Visual Studio Code my editor for Git. You can do that by updating your .gitconfig like so (provided code is in your PATH):
[core]
editor = code -w
After running the git rb command, a Visual Studio Code tab opens with content like this:
pick a22f893d3 Inline outputPath and chunkOutputPath in the client-side configuration
pick 5b861eb7f Add process.env.STATS_MODE to configure stats option
# Rebase 2ec919432..5b861eb7f onto 2ec919432 (2 commands)
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit’s log message
# x, exec = run command (the rest of the line) using shell
# d, drop = remove commit
I can change the pick keyword to edit (or the action of my choice). When I save and close this window (⌘S, ⌘W), the rebase starts and applies the commits one by one, stopping on those I tagged for editing. On those, I can perform the changes I want, then add my files and run git rc (for git rebase --continue), until the rebase is complete. Note that I also aliased git rebase --abort to git ra and git rebase --skip to git rs for consistency.
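Again as a .gitconfig sketch, matching the expansions mentioned above:

```
[alias]
  amend = commit --amend --no-edit
  rc = rebase --continue
  ra = rebase --abort
  rs = rebase --skip
```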
At this stage, I might need to update the remote branch with the new history. To do so, I have a git force alias, a shortcut for git push --force-with-lease. The --force-with-lease argument is a seriously underrated option which protects all remote refs that are about to be updated by requiring their current value to be the same as that of the remote-tracking branch we have for them. Basically, it makes sure you’re not overriding someone else’s work.
So to sum up:
$ git rb 2
# Tag commits for editing in VSC, ⌘S, ⌘W
$ git add path/to/file/i/updated.js
$ git rc
$ git force
I have quite a few other Git tricks up my sleeve, but that will be for another article. For a complete list of my Git aliases, refer to my dotfiles repo.
Speaking of Git tricks, this is your reminder that my brother knows his shit and wrote a three-part article on Git tips & tricks on this very blog:
What about you, what are your Git secrets?
]]>An issue that often arises when it comes to introducing accessibility on a project is that there is either no time or no money for that. “That’s not our audience!” they say. Product owners, often by (understandable) lack of knowledge on the topic, dismiss accessibility for it being too inconvenient to implement.
At N26, I had the luck to start fresh. We had an empty code-base and a platform to build from the ground up. Being an advocate for inclusive experiences, it was out of question for me to give up on web accessibility before even starting. Recently hired in the company, I knew this was likely a battle I could lose, so I decided not to even fight it.
For the first few months, we never mentioned accessibility in plannings, and Just Did It™. We made our interfaces as inclusive as possible. We tried our best to accommodate different uses of the web (devices, possible disabilities, sizes…). During reviews, we would usually point out how we made a component or user interface robust for different scenarios, including for people with disabilities.
This is how we slowly instilled in everyone’s mind (including our product owner’s) that web accessibility doesn’t have to be hard or take longer to implement. We could just do it as we do everything else, provided we consider it from the ground up. And this is how we made it a non-functional requirement. In systems engineering, a non-functional requirement (or NFR for short) is a criterion that describes how a system should be (rather than what it should do). Practically speaking, it means we now have to make things accessible for them to be considered done: accessibility is part of our baseline for quality.
Web accessibility is a complex topic. It’s one of those things where everybody is convinced it’s great and it should be done, but nobody really knows how to do so. The knowledge varies from person to person. Most developers (should) have the basics, but unless they are directly confronted with the problems they are trying to solve (blindness, for instance), they often tend to omit things. We’re only human after all.
The good thing is that mistakes are easily prevented with proper tooling. At N26, we introduced two ways to minimise the number of accessibility problems: via linting, and through the developer tools.
The new N26 web platform is an isomorphic React application. One of the cool things about this is that everything is written in JavaScript (don’t quote me on this statement), including our markup, which is authored using JSX. JSX is an extension of JavaScript used to represent HTML structure in a declarative way. The reason I mention JSX is that since it’s JavaScript (even though eventually compiled), it can be linted with ESLint. And the nice thing about this is that there is an ESLint plugin called eslint-plugin-jsx-a11y.
This plugin does static evaluation of the JSX to look for possible accessibility issues. Because it is fully static (meaning it does not operate in a runtime environment, such as a browser), its effectiveness is limited. But it can help catch mistakes early, such as missing alternative content for images, missing labels for fields, or possibly broken or nonexistent keyboard support.
At N26, we run ESLint on a pre-commit hook. That is, every time a developer commits code, ESLint runs on indexed files, and aborts the commit if there is an error. This way, we can ensure committed code is free of basic mistakes. I highly recommend anyone using React to set up this plugin: it takes little time and can make a big difference.
Linting is an excellent way to avoid mistakes early, but there is only so much it can prevent. Accessibility depends a lot on the environment in which it operates, and without a runtime, there are a lot of issues that are impossible to catch.
That’s why we introduced react-aXe to our code base. It’s a React wrapper around aXe, an accessibility engine for testing HTML-based interfaces. It runs in the browser, and provides insightful information in the developer console.
Because react-aXe modifies the rendering functions from React and React DOM, it should be run in development mode only. It’s also a bit greedy in terms of performance, so better make sure not to enable it in production.
import React from 'react'
import ReactDOM from 'react-dom'
// …
if (__DEV__) {
require('react-axe')(React, ReactDOM, 1000)
}
This developer tool helper does not prevent us from making mistakes of course, but it warns us early (during development) when we do, and tells us how to fix them. Not too bad for 2 lines of code!
The bigger the code base, the more developers contribute to it, and the higher the chances that someone makes a mistake, causing the user experience to be sub-optimal at best, unusable at worst. This will happen, and that’s alright. Still, there are things we can do to prevent it.
As often in software development, the idea is to do things correctly once so they don’t have to be done again (with the increasingly likely possibility of them being done wrong).
Consider an a11y 101 rule: all form fields should have an associated label, even if visually hidden. In order to never forget it, you write a thin layer around the <input> element which accepts a label and renders it. In React, for instance:
import React, { Fragment } from 'react'
import PropTypes from 'prop-types'
// Destructuring `label` keeps it from being spread onto the <input> itself.
const Input = ({ label, ...props }) => (
<Fragment>
<label htmlFor={props.id}>{label}</label>
<input {...props} />
</Fragment>
)
Input.propTypes = {
id: PropTypes.string.isRequired,
label: PropTypes.string.isRequired
}
This way, the label is required to render an Input component, making sure we never introduce an unlabelled form field. Then, you could add a prop to visually hide the label while keeping it available to assistive technologies, so that no developer has to write that by hand and risk doing something incorrect such as display: none.
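Such a prop would typically apply the well-known “visually hidden” CSS (one common variant shown here), which removes content from the visual layout while keeping it exposed to assistive technologies, unlike display: none:

```css
.visually-hidden {
  border: 0;
  clip: rect(0, 0, 0, 0);
  height: 1px;
  margin: -1px;
  overflow: hidden;
  padding: 0;
  position: absolute;
  white-space: nowrap;
  width: 1px;
}
```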
The general idea is to make sure all accessibility related considerations don’t have to be repeated and are implicitly embedded in the development process. Again, this obviously won’t prevent all mistakes from happening, but over time it will dramatically reduce the number of flagged issues.
We mentioned it before: accessibility is a complex topic. It gets even more difficult when you start blurring the line with inclusive design and consider accessibility as a way to offer anyone, regardless of who they are or how they use your product, the best experience possible.
It is because it is so complex that communication is critical to making it successful in the long run. At N26 (at least on our platform) we have a strong code review culture. Everybody contributes to it. Everybody is encouraged to ask questions, comment, suggest improvements and pinpoint possible pitfalls or mistakes. There is no one directly assigned to do reviews; it’s everyone’s job.
On top of the obvious fact that careful code reviews help prevent mistakes, having everyone chime in encourages communication across contributors and sparks discussions that might otherwise not have happened. As a result, people tend to learn from each other and understand why things are done (or not done) the way they are.
In the current team setup, I tend to be the one with the most knowledge on accessibility and inclusivity through design. I take pull-requests as an opportunity to share my knowledge on the topic so soon enough everybody understands the state of things and can contribute to making all our user interfaces as accessible as they can be.
We also have a Markdown document on accessibility. It contains a definition of the term and what we do about it, as well as instructions around our linting and testing setup (as explained in this article). Every time a pull-request sparks an insightful discussion around the topic, we sum it up in our documentation. At the time of writing, here is the table of contents:
Accessibility is not something trivial to test. Fortunately, some brilliant people with their hearts in the right place built tooling around it, such as aXe. A fantastic tool to automate accessibility testing is pa11y.
Pa11y is a Node / CLI utility running HTML CodeSniffer (a library to analyse HTML code) in a headless browser (PhantomJS in v4, Puppeteer in v5). Like aXe, it embeds the rules from accessibility standards and tests them against given URLs. From there, it gives an extensive report with hints on how to fix the issues.
We set up pa11y to run on deployment on all our pages, so that if there is an accessibility error, it fails with a non-zero error code and aborts the procedure. Essentially, we made accessibility mistakes first class errors, so that we don’t deploy broken code.
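The gist of that deployment check can be sketched as follows (an illustrative helper, not our actual script; pa11y resolves each test with a result object carrying an issues array):

```javascript
// Hypothetical helper: given pa11y results for several URLs, decide
// whether the deployment should be aborted (non-zero exit code).
function shouldAbortDeployment(results) {
  return results.some(result => result.issues.length > 0)
}
```

The deployment script would run pa11y against every page, feed the collected results to such a helper, and call process.exit(1) if it returns true.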
In order to test dynamic URLs (articles, for instance), we start by retrieving them all from our CMS so that we can provide them to pa11y for testing. It makes testing slightly longer, and dependent on the CMS’ API health, but it really helps us make sure we don’t inadvertently break accessibility. I find it especially useful given we don’t actively do manual QA with keyboard navigation or screen-reader usage.
In the future, we might be able to access the Accessibility Object Model (or AOM for short) to unit-test accessibility. The Web Incubator Community Group is pushing for a proper AOM implementation. If Chromium ever gets to implement it, we’ll be able to use it through Puppeteer, which opens a whole new world for testing accessibility. If you are interested in the topic, I highly recommend “Why you can’t test a screen reader (yet)!” by Rob Dodson.
As Heydon Pickering says, accessibility is not about doing more work but about doing the work correctly. And it’s never truly finished. It’s something we should keep doing all the time to make our products accessible to the many.
This is hard to do. It requires expertise, and often seems like an ideal beyond reach. I hope this write-up helped you find ways to introduce an accessibility mindset to your team.
If we sum up: catch mistakes such as a missing alt attribute or label element when deploying to production.
Thanks for doing the Right Thing™ and happy coding!
]]>📦 January 30th. I started the year by releasing version 3 of a11y-dialog, nicely refining the API. I went on to release version 4 (although less interesting) on October 4th. To this day, a11y-dialog remains the open-source project I’m the most happy with. I highly encourage you to use it in your projects. And for y’all React fans, I wrote a component wrapper: react-a11y-dialog.
👩💻 May 19th. DEVit in Thessaloniki (Greece) was so much fun last year that I decided to go back in 2017. I went with my friend and co-worker Mike Smart where we conducted a workshop (our very first) on React. It had way more success than originally expected since we ended up with 45 participants! So if you’d like us to run a workshop on React, let us know.
🎤 May 20th. Back at DEVit in Thessaloniki (Greece) to talk about diversity in gender and names in our industry. It felt very good not giving a technical talk and focusing on a topic that I think really matters. Also the conference was a blast, just like last year!
🐦 June 20th. I went on a Twitter frenzy and did a 100-tweets long thread on accessibility and inclusive design. It contains tips, advice, comments, and all in all a lot of information about these topics.
📦 July 27th. I open-sourced the first web project from N26: express-simple-locale, a small Express middleware to retrieve the language of a user, intended to replace the more convoluted express-locale.
🎤 September 10th. I went to Minsk (Republic of Belarus) for the first time, to give my “Clever, stop being so” talk about inclusive design at the CSS-Minsk-JS conference. It was quite interesting to realise the cultural differences between France/Germany and Belarus.
📦 September 21st. At N26, we use GitHub releases to list our changes for each release. They present a lot of benefits: they live with the code without having to be versioned, they support Markdown, they’re readable without having to clone the repo… But they are not searchable. Until now! I wrote a script to search for text within GitHub releases. I hope it helps!
👥 October. It was the month when I got the opportunity to take on the role of facilitator in my team, acting as a part-time (unofficial) scrum master (thanks to my friend and outstanding co-worker Andrea Franke). I’ve been fulfilling that role since, getting more and more interested in processes, agile methodologies and how to create a safe environment for a team to work in.
📦 October 13th. After multiple failed attempts at setting up Greenkeeper at work, I decided to make my life easier and wrote a tiny Node script to check for outdated dependencies in a package.json. Feel free to go nuts with it!
🎤 October 26th. Invited by the kind folks from Locastic to speak at their Tinel event, I gave my very first talk at a local meetup in Split, Croatia. It was a lot of fun and felt super good going back to Croatia after a few years. Also had the tuna steak of a lifetime there!
💻 November 17th. I switched from Sublime Text 3 to Visual Studio Code after my co-workers convinced me to try it out. It took me half a day on VSC to realise I wasn’t switching back. I’ve been delighted to work within this IDE since, it’s brilliant.
In September 2016, Mike Smart and I joined N26 to build the new web platform. Over the course of 2017, we have rebuilt the registration process, the Mastercard selection, all the logged-out pages (login, password reset…) and half the website. All of this runs on a unique repository (deployed across multiple servers), giving us the ability to share and reuse infrastructure and components between projects.
We have a lot of freedom to make this enterprise as good as it can be, which gives us room to experiment with a lot of interesting technologies such as React, GraphQL (with Apollo), Fela, Cypress, Prettier, Docker, Jenkins…
With more projects coming up and a lot of work to do, we have been and still are hiring for our team, currently made of 5 developers from diverse backgrounds and skillsets. We should see 3 new faces joining us during 2018, and I’m very excited to see what we can achieve with such a talented team!
I started burning out a little mid-2016. I say a little, because it was nowhere near as bad as a proper burnout. But it definitely was noticeable: I stopped writing, I slowed down with my open-source contributions, I spent less time on Twitter…
2017 has been sort of on the same track. But I think I figured out why. As much as I enjoy coding, I have come to find it quite boring. We tend to solve the same problems over and over. And while some projects are exciting, all in all, we build the same thing a little differently, again and again.
This is why I got really into accessibility and inclusive design. This is why I stopped giving technical talks and started talking about building interfaces with diversity in mind. This is what I want to do now, because I think at the end of the day it matters way more than the name of that function or the character used for indentation.
Therefore if you’d like me to speak at your event in 2018, feel free to get in touch. Let’s see if we can make this happen. :)
✨ Prettier is an outstanding project. Of all the technical decisions we made during this year at work, this is the one I’m the most happy with. Even though I was quite skeptical to start with. Having Prettier autoformatting our code on commit massively improved our code review process. Because there is no questioning formatting anymore, comments and questions are now focused on the logic of the code itself rather than its appearance. I highly recommend Prettier to all teams.
❗️ Along the same line, I’ve realised how good a well-tuned linter can be. We moved from standard to a custom ESLint configuration to make it play nicely with Prettier, which allowed us to pimp it further. We added eslint-plugin-jsx-a11y to statically evaluate JSX in order to spot obvious accessibility mistakes. We disabled some rules, with extensive documentation explaining why each was worth removing. Most if not all of us have linting integration within our IDE, and we lint (and run Prettier) on a pre-commit hook to ensure no incorrect code reaches the repository.
module.exports = {
parser: 'babel-eslint',
extends: ['standard', 'standard-react', 'plugin:jsx-a11y/recommended'],
plugins: ['jsx-a11y'],
env: { jest: true },
rules: {
// We always want a blank line before a `return` statement. This rule
// enforces that and saves us from pinpointing this in every code review.
// Ref: https://eslint.org/docs/rules/padding-line-between-statements
'padding-line-between-statements': [
2,
{
blankLine: 'always',
prev: '*',
next: 'return',
},
],
// These rules conflict with Prettier formatting and therefore need to be
// disabled.
// Ref: https://eslint.org/docs/rules/operator-linebreak
// Ref: https://github.com/xjamundx/eslint-plugin-standard/blob/master/rules/computed-property-even-spacing.js
'operator-linebreak': 0,
'standard/computed-property-even-spacing': 0,
// PropTypes validation does improve readability and understandability of
// React components, but authoring and maintaining them everywhere is
// unrealistic.
// Ref: https://github.com/yannickcr/eslint-plugin-react/blob/master/docs/rules/prop-types.md
'react/prop-types': 0,
// This rule enforces that `onClick` handlers come with key handlers as
// well. There are cases where this is not what we want, such as for the
// `SideTracker` higher-order component.
// Ref: https://github.com/evcohen/eslint-plugin-jsx-a11y/blob/master/docs/rules/click-events-have-key-events.md.
'jsx-a11y/click-events-have-key-events': 0,
// This rule prevents using the `autofocus` HTML attribute (`autoFocus` in
// JSX) because the W3C warns against possible accessibility issues.
// Ref: https://w3c.github.io/html/sec-forms.html#autofocusing-a-form-control-the-autofocus-attribute
// As long as we don’t abuse this and we pay attention to how we use it,
// there is no good reason not to use it.
// Ref: https://github.com/evcohen/eslint-plugin-jsx-a11y/blob/master/docs/rules/no-autofocus.md.
'jsx-a11y/no-autofocus': 0,
// By default, this rule expects all form controls to have an associated
// label with a `htmlFor` props mapped to their `id` prop *and* that their
// label wraps them entirely. This latter behaviour is undesired.
// Ref: https://github.com/evcohen/eslint-plugin-jsx-a11y/blob/master/docs/rules/label-has-for.md
'jsx-a11y/label-has-for': [2, { required: 'id' }],
},
}
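The pre-commit hook mentioned above can be wired up with husky and lint-staged. Here is a sketch of the relevant package.json entries — the tool choice and exact configuration are assumptions for illustration, not necessarily the setup described here:

```json
{
  "scripts": {
    "precommit": "lint-staged"
  },
  "lint-staged": {
    "*.js": ["eslint --fix", "prettier --write", "git add"]
  }
}
```

With this in place, only the staged files get linted and formatted on every commit, which keeps the hook fast even on a large codebase.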
✅ Tests are nice. Like, really nice. Throughout the year, I’ve been working hard on our testing setup. My goal was (still is in fact) to make tests easy to write, and enjoyable to run. We’re still not quite there yet, but I’m super happy with what we have so far. It looks more or less like this:
Having such a strong focus on testing enabled us to do 195 live releases between March and December (~9 months) without stressing about breaking something. It also allows me to tell new team members what I told the CTO when I got hired: “[I] plan on writing tests and go home on time.” I stand by this, and don’t want any of my teammates to pull extra hours, especially for something that could have been prevented with proper test coverage.
💢 On most projects, technical expertise is not the bottleneck. It does matter. It does help to have experienced developers in a team. But the idea that a team made of highly skilled developers is automatically going to outperform is very misguided. Most problems that come up, especially in a fast growing company, are human-related issues (lack of communication, ego clashes, misunderstandings…). During this year, I’ve tried my best to provide a safe space for my teams, to enable everyone to work at their best, no matter their technical skills to begin with. I only want to keep doing that in 2018.
I know it’s a bit out of scope for the yearly retrospective, but I think it’s worth mentioning what I would like to focus on for next year. If only for me.
👭 I get my energy from enabling people to work better, both from a technical standpoint and on the process and communication side. I would like to keep doing that next year. I learn a lot doing so, and it develops a social side of me that has long been lagging behind. It’s helpful, both for me and for other people.
🌱 I should try to be a bit more healthy next year. Maybe do some sports, eat better, or at least more regularly. Clean my flat more often. All in all, get my shit together. I tend to enter phases where I just let myself go, and that’s not good. I need to work on that.
📝 I didn’t write much in 2017, and in many subtle ways, I realise I’ve missed it. Maybe I should try to write a bit more in 2018, even if it’s only small write-ups. I’d like to share more of what I do at work, because I’m very proud of the platform our team has been building. Hopefully I’ll be able to write about small parts every now and then.
I think that’s it for this year, folks. I hope you had it good yourself, and are ready for the new year!
Safia Abdalla started a thread about problems encountered on the web by people with any kind of disability.
It’s super insightful, but there are hundreds of responses, so I thought I’d write a TL;DR organised in categories (a TL;DR which is still long…).
Note: let me take this as an opportunity to link to this introduction to web accessibility I wrote.
Someone explains their frustration about small font sizes that cannot be safely increased because it breaks the layout. A few persons —some with, some without ME / CFS— agree or express a similar statement.
Some people extend the previous statement to include poor color contrasts, such as light grey on white background for instance or text on top of image.
A colorblind person says color-coded interfaces are very hard to use (toggles, heatmaps, etc.). This statement is shared by several other people with the same kind of visual impairment.
A person says that they would really need to be able to turn an article into plain text, so they can export it to their Kindle where they can read in optimal circumstances. Unfortunately, that’s usually not that easy. Another person says they copy and paste content in another program which has better reading abilities.
A visually-impaired person has a lot of trouble with non-accessible CAPTCHAs. This limits which services they can use.
A person suffering from chronic migraines says they turn down the brightness of their screen and/or wear sunglasses to browse the web. For a person with similar symptoms, sites and apps offering a night mode are fantastic. Another person mentions migraines and how they are often completely underestimated in webdesign.
A person describing themselves as “visually challenged” says simply understanding the layout is sometimes difficult.
A person apparently using a screen-reader says long navigation menus that get read out are annoying; websites should implement a “skip-to-content” link. They also say alt text for images and captions for videos should not directly repeat the text from the page.
Someone says the lack of focus outline is a big problem for them when navigating with the keyboard, especially on links. They should not be removed (without replacement) because they “look ugly”.
Someone warns against the abuse of hovering effects and mouseover only interactions, such as opening a navigation menu.
A person with Parkinson’s disease explains how mouse interactions are extremely hard to perform accurately.
People with hand tremor say precise gestures such as double-clicking or tap-and-hold are difficult to perform.
Someone with cerebral palsy shares the same problem and literally cannot use a mouse because of it; they use touch screens instead.
Someone says click/tap targets that are either too small to aim precisely, or —interestingly enough— bigger than they need to be (such as headline + excerpt instead of just headline) are sometimes hard to use.
A person explains their fingers sort of stop working after a little while of using the computer / touch screen at which point they have to rely on voice-to-text.
A person with deep pain in their elbow says the lack of keyboard support across the web is dramatic. Statement (unfortunately) shared by other people.
Someone with ADHD says they can’t focus as soon as there is a “subtle” animation always running. A lot of people suffering from ADHD to a certain level share the same opinion about animations.
Along the same lines, someone wishes they could disable GIFs.
Another person with ADHD says big walls of text can be difficult to get through. To work around this problem, they use text-to-speech. This is a common problem for a lot of people suffering from a large variety of impairments and disorders.
Similarly, another person says Wikipedia is hard to browse as pages often consist of long paragraphs where they get lost very quickly. This person also resorts to text-to-speech.
Another dyslexic person says that an automatically moving scroll position really hurts readability. Anything that removes the current text selection can also be a problem.
Someone with sleep disorder says they have to run f.lux (popular light & color adjustment software) as soon as 5PM, which makes them notice a lot of contrast issues (especially on links).
A person with autism says they struggle processing audio input when they feel overloaded, making them rely on captions.
The same person explains how some design choices can cause migraines or dizziness, which is unfortunate but not as bad as the possible seizures triggered by heavily animated websites.
Another person with a cognitive disorder explains how autoplaying videos and moving ads can cause overload quite quickly.
A person suffering from ADHD and autism joins in about automatically playing videos. Let it be said that this is also very annoying to anyone (although not damaging).
A person with Asperger syndrome says certain types of humor are “hard to process”.
Some people —some with ADHD or PTSD, some without— share their experience of zoning out and following links like Alice down the rabbit hole.
A person suffering from epilepsy says the lack of content warnings is a problem.
A HoH person points out that not enough videos/audios are captioned, which is a shame because captions are useful to more people than just deaf/HoH people.
Someone says they feel like we sometimes abuse video as a medium on the web, and not everything has to be a video. Simple text often is just fine. This feeling seems shared by a few people.
Don’t shame users in error messages. It can be seen as playful but also possibly condescending and off putting.
Be careful with text on images, no matter how cool it looks. It can be done correctly but it’s very hard.
Hover effects are nice, but remember that not all users have a mouse/trackpad. Make sure content can be accessible otherwise.
Pa11y is a fantastic tool to automate the basics of accessibility testing.
The hidden attribute can be used to hide an element visually and from screen readers.
A low hanging fruit to figure out if a content flow makes sense is to disable CSS and to see if it looks meaningful.
Connected radio inputs should be gathered within a fieldset, with a legend serving as label for the group.
Be careful with infinite animations, even subtle ones. People with Attention Deficit Hyperactivity Disorder could find them distracting.
Make sure not to present paragraphs that are too long. Some users tend to get lost quickly with huge blocks of text.
Don’t disable zooming. Some people need it to comfortably read your content. Some unusual situations require users to zoom.
About 7 users out of 10 would leave a site they find difficult to use (figure from CAP16).
The autistic spectrum is very wide and a lot of people are affected. Designing straight-forward UI helps tremendously.
Non-decorative images should have alt text. It is read out by screen-readers to provide information about the image content.
If you can provide a night mode, do it. A lot of users prefer browsing in night mode, no matter their vision.
Feel free to change the default browser outline, but make sure to clearly indicate which element has the focus.
A good way to test if your controls are forgiving enough is to use your mouse/trackpad with your other hand.
Fancy layouts are tricky because they can obscure the way content flows. Content hierarchy is very important to do right.
Recent research from @captainsafia shows that non-captioned videos are one of the main accessibility problems faced on the web.
Screen readers do not have to be scary. You can get started using one by trying ChromeVox for Chrome which is very straight-forward.
Having a “skip-to-content” link at the very top of the page helps screen-reader users not to go through your entire header.
About 40 million persons worldwide are blind. Roughly 250 million persons suffer from low-vision (figures from WHO).
Performance is accessibility. People in large regions of the world do not have access to fast internet.
Proper content flow and keyboard navigation also helps power users who want to do things fast and efficiently.
Think about main call-to-actions’ position on screen. Not everybody has long thumbs. Not everybody has two hands available.
Avoid justifying text, especially in large amount, as it makes it harder to read or even confusing for some people.
Use HTML landmarks to help people navigate your document (HTML5 structural elements & the role attribute).
Don’t autoplay videos, seriously. If you do, mute them. Videos auto-playing with sound go from annoying to damaging.
“CSS only” solutions—while clever—usually overlook the accessibility aspect of a feature, making them sub-optimal.
Parallax scrolling and heavy animations can cause nausea or sickness. Go easy on them or make it possible to disable them.
The Chrome team maintains an accessibility audit for the Chrome DevTools. Use it, it’s great.
Use clear language. Tone and even humour are important of course, but in the end, copy should be understandable by everyone.
You’ll never have a perfect experience for absolutely everybody, and that’s okay. It doesn’t have to be perfect. Do your best.
Accessibility on the web is a lot about caring about users and not being a bigot. That’s the first step, keep being a great human being! ✨
High color contrast not only helps people suffering from color-blindness but also users browsing under sunlight.
As @heydonworks says, accessibility is not about doing more work. It’s about doing the right work. Ideally from the ground up.
Never convey information through color only. Colors definitely bear meaning, but they should be a secondary communication channel.
White space is free. Don’t be afraid to use it.
I like light fonts as much as the next person, but they can be extremely hard to read. Keep them for headlines and large text.
Dialog windows should be closable by clicking outside of them or pressing ESC. It saves people from aiming at the tiny cross.
A good way to ensure sufficient color contrast between two colors is this checker from @leaverou.
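For the curious, the WCAG formula behind such contrast checkers is short enough to implement yourself. Here is a sketch (my own helper names; colors given as [r, g, b] arrays in the 0–255 range):

```javascript
// WCAG 2.x relative luminance: linearise each sRGB channel, then weight.
function luminance([r, g, b]) {
  const [rs, gs, bs] = [r, g, b].map((c) => {
    c /= 255
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4)
  })
  return 0.2126 * rs + 0.7152 * gs + 0.0722 * bs
}

// Contrast ratio between two colors: (L1 + 0.05) / (L2 + 0.05),
// with L1 the lighter of the two luminances. Ranges from 1 to 21.
function contrastRatio(a, b) {
  const [l1, l2] = [luminance(a), luminance(b)].sort((x, y) => y - x)
  return (l1 + 0.05) / (l2 + 0.05)
}
```

Black on white yields the maximum ratio of 21:1; WCAG AA asks for at least 4.5:1 for body text.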
Don’t forget the lang attribute on the html element, and on any element in a different language than the rest of the document.
Not all users have to have the same experience on your website. But they should all have access to your content.
Hiding an element while keeping it accessible isn’t super straight-forward. @ffoodd_fr found a bulletproof solution.
Building an accessible product is not a one shot thing. It takes time and care along the lifetime of the project.
Highly animated content should be introduced with a warning to protect people suffering from epilepsy.
The prefers-reduced-motion media query (if supported) comes from OS settings. Here is more info about this media query.
Decorative images should have an empty alt attribute (alt=""). Here is a good decision tree.
The usability of floating labels is debatable. Lack of space, confusing animation, poor contrast, only working for inputs, cropped label…
The tab sequence should be trapped within a dialog window. The inert attribute will soon natively provide that behaviour.
Some people use screen magnifiers to browse the web. Designing for them isn’t too hard, here are good guidelines for screen magnifiers.
Video captions are not only useful for deaf/HoH but also people browsing content in loud areas or without sound (e.g. public transports).
When using “−” as in “minus”, use the actual minus character (or its HTML entity) rather than a dash. Same for “×” in place of “x”.
JavaScript is not the enemy of accessibility. Actually some patterns can only be made truly accessible with JavaScript.
Using personas can help keep accessibility in mind while working, as well as encourage QA to test for it.
Video captions also benefit non-native speakers. Despite my decent English, I watch Netflix with English captions all the time.
I don’t understand why Firefox still runs with this comically thin dotted outline. I can barely notice it.
Roughly 360 million persons worldwide suffer from hearing loss (figures from WHO).
The lack of spacing between lines of text (line-height) usually causes quite important readability issues. Easy to fix though!
It seems uncommon for users to zoom out. Which means, fonts are usually too small on the web. Don’t be afraid to go big(ger)!
Use the <main> element to define your main content section. It should be unique and should not contain layout chunks (header/footer…).
A good way to ensure keyboard navigation is to unplug your mouse or disable your trackpad when testing.
The document outline (hierarchy of headlines) matters. It’s used by certain programs to navigate within a document. Take care of it.
When you feel like bitching about Microsoft and their browser, remember that Edge is by far the most accessible one (see browsers accessibility comparison).
A survey from 2016 shows that one person out of 10 suffers from some sort of color-blindness (could be red/green, blue/yellow or complete).
Don’t use a tabindex value greater than 0. It messes up the tab order and can be very confusing.
WebAIM’s hierarchy for motivation to accessibility is: Inspire → Enlighten → Reward → Require → Punish → Guilt.
Vocal UIs solve a lot of issues but also introduce some. Mute (temporarily or permanently) people and people with a stammer can struggle.
PDF is quite an inaccessible format, and here is a fantastic reminder of why PDFs are tough to do right.
As @sundress says, web accessibility is not an edge case.
If you are getting started with VoiceOver, this is a fantastic VoiceOver cheatsheet.
All form controls should have an associated label. This is important for screen-reader users to know how to interact with a form.
CSS pseudo-elements’ content is read out loud by screen-readers so be sure it contains relevant information.
In a cross-functional team, everybody can contribute to a more accessible experience. Designers, devs, PM, QA… Everyone. :)
ARIA should not be used as a fix for poor HTML. Start with clean HTML, then enhance with ARIA if necessary.
a11y.css is a clever bookmarklet from @ffoodd_fr using CSS to detect possible accessibility problems.
In the US, 50% of people aged 75+, 25% of people aged 65-74 and 10% of people aged 21-64 suffer from some sort of disability.
Making content accessible and making sure it stays so should be considered during planning, not as an after-thought.
To know whether an image is decorative or not, ask yourself if the content would still make sense were the image removed.
Be careful with infinite scrolling as it can be problematic for keyboard users. Make it possible to replace it with pagination.
Video is not always the right medium to convey content. Ideally, a text transcript should be provided so people can choose their way.
@Heydonworks maintains a collection of in-depth articles to build inclusive components.
Add tabindex=0 to scrollable regions so keyboard users can access them.
Icon fonts are quite bad for accessibility. Better to opt for SVG as it’s an accessible imagery format.
Provide a way to undo destructive actions so they can be (at least temporarily) reversible. Undo usually > confirm.
Many screen-reader users run Internet Explorer or Firefox on Windows; some run Safari on iOS.
Footnotes are not super straight-forward to implement correctly. That’s why I wrote about accessible footnotes a while ago.
Toggle buttons should have a persistent label. Hat tip to @heydonworks for the example.
Some gestures like tap-and-hold or double clicking can be difficult to perform for users suffering from tremor or tendonitis.
Recent studies show that between 10 and 20% of the world population suffer from some sort of disability (temporary or permanent).
Links opening in a new window should be indicated as such (obvious iconography, explanatory ::after pseudo-element…).
As @NeilMilliken says, most people with a disability weren’t born with it. As we age, the likelihood increases that we’ll experience one.
High contrast mode is not about design anymore but strict usability. You should aim for highest readability, not color aesthetics.
Avoid interactions that are timed based. Some people are slow. Some people take time. It should not be a stressful race to do something.
Comic Sans is actually a fantastic font face for people suffering from dyslexia. OpenDyslexic is a free open alternative.
Inserting zero-width spaces and invisible full stops can make screen-reader speech nicer, as shown by @simevidas.
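As an illustration of the idea (a made-up helper of mine, not @simevidas’ exact technique): inserting U+200B after separators gives speech synthesis a chance to break inside long compound strings.

```javascript
// Insert a zero-width space (U+200B) after each slash so screen readers
// can pause inside long paths or compound words; visually nothing changes.
function softBreakSlashes(text) {
  return text.split('/').join('/\u200B')
}
```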
Underlining links provides value to users who struggle to discern contrast. I bet this is why links were designed this way in the first place.
This illustration from Microsoft is a good reminder that we’re all different, but we all should have access to the web equally.
Web accessibility is incredibly interesting. Don’t see it as a burden, see it as a challenge and embrace it!
That’s a hundred, I’m done for this session! Thank y’all for reading this far. Keep building cool stuff, you awesome people! 💖
When building a client-side React application, routing is usually handled with React Router. Everything works like a charm until you try to load / refresh a page whose path is not /. Then, you hit the 404 wall. Which is just a blank page, really.
This pitfall is documented in the create-react-app README. It currently suggests using a hash router or some very clever yet unfortunate hacks.
In their docs, Netlify explains how a 404.html file can be added so it’s served in case a non-matching URL is requested.
In theory, that works. You can create the file and Netlify will serve it. Except that there is no trace of your JavaScript bundle in this file, and you don’t know its complete file name since it’s hashed in production (e.g. main.8626537e.js).
Indeed, create-react-app dynamically adds the script tag for your bundle (as well as other things) to your index.html file when building your project (through npm run build). And as far as I know, there is no way to tell it to do that on another file or to change the name of this file.
The solution ends up being super simple. Duplicate the index.html file as 404.html post-build. To do so, update the build task like so:
{
  "build": "react-scripts build && cp build/index.html build/404.html"
}
Both files being identical, the app will work the same on the root path or on any path that does not resolve, making Netlify redirect to 404.html.
That’s it. ✨
PS: I suspect this would work the same on GitHub Pages according to their docs. If anyone can confirm, that would be super rad.
“Whiteboard interview” is a term describing the practice of asking a candidate to perform a coding exercise on a whiteboard (hence the name) to judge their technical skills. The usual example is to ask an applying engineer to invert a binary tree using nothing but a pen.
While it may sound stupid, whiteboard interviews are actually quite popular, including in very large corporations, and are sometimes referred to as a good way to judge a candidate’s technical ability to fulfill a position.
Well, this is fucking bullshit.
Short answer: it has little to no connection to the real world and to what the candidate would actually do in their job were they hired.
Now for the long answer. I understand the idea behind the whiteboard exercise: testing a candidate’s ability to solve a problem without focusing too much on the code itself. On paper, that makes sense. In practice, it’s quite irrelevant. As the aforementioned Twitter thread shows, no developer —no matter the experience— is able to function fully without a little help from StackOverflow once in a while. Nor should they.
Secondly, it puts a hell of a lot of pressure on the candidate. Not all of them can handle that. Hell, I’d be terrible. You know how you hate it when someone stands behind your shoulder while you’re working? Well, guess what, it’s the same fucking thing. Nobody likes that. Ever heard of impostor syndrome? Nothing like someone silently judging your every move to trigger it. I know some fantastic developers who would be petrified in such a session. They would be fully adequate to do the job though, and they would friggin’ nail it.
I hear some people say “yes, but you can judge resilience to pressure”. Fuck. That. Putting pressure on employees is neither a safe practice nor a good way to improve productivity. How about giving them the right mindset and environment so they feel empowered and willing to commit to their work?
Also, it usually puts the focus on the wrong point. Don’t ask someone to demonstrate algorithmic understanding on a whiteboard if they are going to be implementing REST APIs or CSS layers for the next two years. At least try to ask something related to what they will actually do. At the very least.
Anyway, this is not an article about why I think whiteboard interviews are a bad idea. Some people did that better than I would. I actually wanted to share an idea to improve the situation (hopefully): replacing the whiteboard challenge with a code review. It’s not a new idea, but it seems so uncommon compared to code challenges that I thought it might be worth a few lines.
I have been thinking about this quite a lot, and I found many benefits to conducting a code review in place of a technical challenge, so bear with me for a long list.
It’s an encouraging setup. Reviewing code is much less stressful than writing code. Both the candidate and the interviewer can sit side-by-side to do it. It is basically going to be a discussion, slowly going through the code and commenting things that pop out, maybe even making suggestions. The risk of a candidate under-performing due to pressure is much lower. Therefore the outcome is more likely to be representative. This is pairing to improve code, not fighting to prove who’s smarter.
It’s real-world work. Unlike coding on paper or a whiteboard, reviewing code is something one is actually likely to do on a daily basis, be it through the GitHub interface or by sitting with someone during a pair-programming session. This is a direct glimpse into how the candidate will approach this exercise, which is what they will do once hired.
Perfect to judge technical skills. There is no need to see someone code to judge their ability to write code. If you can trust someone’s technical knowledge from a Twitter timeline, you can definitely do that by watching them comment code. By skimming through a pull-request, a candidate can definitely show they know their thing (or not). Did not spot the obvious mistakes from the PR? Well, that’s worrying. Actually found a bug that silently sneaked in? Pretty impressive.
Excellent to get the full picture. Provided the pull-request is not too narrowed down, reviewing code can tell a lot about the candidate’s attention to detail and general knowledge about the stack the company works with. In the case of a frontend developer for instance, a complete feature PR could involve HTML, CSS, JavaScript, accessibility, performance, design, documentation, security, etc. A good way to see if the candidate is curious about other topics or very much focused on a specific technology.
Focused on empathy. Code review is not exclusively about code. It is also about empathy. It’s about phrasing comments in a positive, non-blaming way. It’s about focusing on the things that matter, and not necessarily nitpicking on details. It’s about sharing positive comments as well, and showing appreciation. For instance, I used to perform code reviews the way a code linter throws errors. I learnt to be more tactful.
Tells about the company. Bringing code review and knowledge sharing into the interview tells a lot about the mindset of the company. It shows code review is a thing (hint: it’s not the case everywhere), and that people actually work as a team by helping each other. It might also introduce the tech stack, the standards in place, the conventions, etc.; it basically gives a good glimpse of the way code is written in the company, which is definitely something the candidate is interested in.
Now it’s always the same: the content still matters. You can’t ask any candidate to review any kind of code. I think the best would be to create a pull-request specifically for that.
If hiring a senior JavaScript engineer to build an engine, you don’t want them to review CSS code, but you definitely want to test their knowledge about performance and their attention to documentation and testing. Similarly, if hiring a frontend designer, you want to make sure they know a good deal of valid, accessible HTML/CSS and have an eye for design.
Here are a few topics you could involve when hiring a frontend position:
For a more general approach, I recommend creating a pull-request that covers an entire feature, such as the implementation of a UI component (a dropdown, a slider, a media object…).
Now there are several ways to tackle this. Either you create this pull-request the way you would actually write and submit it for review, or you make it contain some errors to see if the candidate notices them.
If you go this way, you might want to include some admittedly big issues: invalid HTML, unsupported CSS with no fallback, JavaScript bug, accessibility mistake, XSS vulnerability, poorly performing code… Then you could introduce some smaller issues, like typos in documentation, lack of comment on something obscure, duplicated code, non-tested edge-case, inconsistent naming convention, etc.
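To make this concrete, here is the kind of small, deliberately planted bug such a pull-request could contain — a made-up example of mine; any comparable mistake works just as well:

```javascript
// Deliberately buggy: the loop stops one item short, so the last entry
// is never rendered. Will the candidate spot the off-by-one error?
function renderItems(items) {
  const output = []
  for (let i = 0; i < items.length - 1; i++) {
    output.push('<li>' + items[i] + '</li>')
  }
  return '<ul>' + output.join('') + '</ul>'
}
```

A candidate who tries a couple of inputs in their head, rather than skimming, catches this in seconds — which is exactly the behaviour you want to observe.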
If you want to test git knowledge, work on your commits. Craft a commit that leaves the branch in an unstable state, one that does several things at once, one that does not respect the wording convention, and so on.
Don’t forget to add a bit of description to the pull-request like a developer would normally do. It should give the context: what does this do, why, and how.
If you meet the candidate in person, I’d suggest sitting side-by-side to go through the pull-request together. Just ask the candidate to comment on what they spot. There is no right or wrong answer per se; it’s a matter of seeing how they approach the exercise. Try to get the big picture.
If you perform the interview remotely, there might not be a need to do this exercise live. If the candidate has access to the repository, they can submit their review and the whole process can be done asynchronously. It’s up to you, but I would recommend doing this face-to-face or during a call, if only to make the whole thing a bit more human.
I have never had the chance to conduct an interview like this so far. I am convinced it is more relevant than whiteboard or code challenges most of the time. I shared this thought on Twitter and some people told me they have been doing this successfully for a while.
A good open-source project would be to create a solid pull-request to conduct code review interviews and put it on GitHub, at least to give an idea of what it could look like.
Anyway, I hope I convinced you as well! If you ever try this, either as a candidate or an interviewer, please tell me how it was. I’m very interested.
All in all, it’s quite a big version, as the script has been almost entirely rewritten. There is not much rationale behind it, except that it seemed like a good time to dust everything off.
Still, quite a few things changed for you, hence the major release. Let’s have a little tour.
In version 2.*, the main element was assumed to have a main id. Not only was this highly arbitrary, but it also did not play quite well with CMSs like Drupal or WordPress. There was a long discussion about it.
From version 3, all siblings of the dialog element will be toggled (understand: via the aria-hidden attribute). Since the documentation has always recommended having the main content container and the dialog element side by side, it should not be a big deal for most projects.
If toggling siblings does not work for any reason, it is possible to pass an Element, a NodeList or a selector as second argument. This will define which elements should be toggled on and off when the dialog is being hidden or shown. For instance:
const el = document.querySelector('#dialog')
const dialog = new A11yDialog(el, 'body > *:not(#dialog)')
This should hopefully make CMS integrations easier.
To maintain the exact same behaviour as before, you can do:
const el = document.querySelector('#your-dialog')
const dialog = new A11yDialog(el, '#main')
The .create() method

In version 2.5.0 was added the .destroy() method, which essentially removed all bound listeners from dialog openers and closers (as per #52). From there, the dialog was still sort of usable, but only programmatically through the JS API.
From version 3, there is now a `.create()` method to pair nicely with `.destroy()`. It is called automatically from the constructor when instantiating a dialog, so nothing should change for the most part.
This method is essentially meant to provide a counterpart to the `.destroy()` method. It binds click listeners to dialog openers and closers. It can be particularly useful when adding openers and closers dynamically to the page, as `.create()` re-performs a DOM query to fetch them.
// Remove click event listeners from all dialog openers and closers, and
// remove all custom event listeners from dialog events
dialog.destroy()
// Add back event listeners to all dialog openers and closers
dialog.create()
Note that it is also possible to pass the target containers (the ones which are toggled along with the dialog element) to the `.create()` method if they ever happen to change (unlikely). Otherwise, the ones given on dialog instantiation will remain.
In version 2.*, the dialog element itself was firing DOM events when shown or hidden. To be honest, I have no idea why I went down the DOM events route before, as it is a compatibility nightmare.
// Version 2.*
dialogEl.addEventListener('show', function () {
  // Do something
})
dialogEl.addEventListener('hide', function () {
  // Do something
})
From version 3, it is now possible to register event listeners on the dialog instance itself with the `.on(type, handler)` method, and to unregister them with the `.off(type, handler)` method.
// Version 3
dialog
.on('show', function() {
// Do something
})
.on('hide', function() {
// Do something
})
Note that the `.destroy()` and `.create()` methods also emit events.
dialog.on('destroy', removeDialogNode)
// …
dialog.off('destroy', removeDialogNode)
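For the curious, the mechanics behind such an emitter are simple enough to sketch. The following is a minimal, hypothetical implementation of the `.on(type, handler)` / `.off(type, handler)` pattern described above, not a11y-dialog's actual code:

```javascript
// Minimal event emitter sketch (illustrative, not a11y-dialog's code).
class TinyEmitter {
  constructor() {
    // Map of event type -> array of registered handlers
    this.listeners = {}
  }

  on(type, handler) {
    this.listeners[type] = (this.listeners[type] || []).concat(handler)
    return this // chainable, as in version 3
  }

  off(type, handler) {
    this.listeners[type] = (this.listeners[type] || []).filter(
      (h) => h !== handler
    )
    return this
  }

  emit(type, ...args) {
    // Call every handler with the given arguments
    // (e.g. the dialog element and the trigger element)
    const handlers = this.listeners[type] || []
    handlers.forEach((h) => h(...args))
    return this
  }
}
```

The instance keeps its own registry of handlers, which is why no DOM events are involved and why the methods can return `this` for chaining.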
In version 2.*, custom (DOM) events used to pass an object to the registered callbacks. It had a `target` key containing the dialog element, and, when triggered from a user action (such as a click), a `detail` key containing the trigger element.
// Version 2.*
dialogEl.addEventListener('show', function(event) {
// event.target = dialog element
// event.detail = trigger element
})
From version 3, events pass two separate arguments to the registered listeners: the dialog element, and the trigger element (if any).
// Version 3
dialog.on('show', function(dialogEl, triggerEl) {
// …
})
Missing `aria-hidden="true"` now safe (possibly breaking)

In version 2.*, omitting the `aria-hidden="true"` attribute on the dialog element could cause weird issues where the `.shown` property would be correctly synced with the attribute, but the rest of the library could be buggy on the first show/hide.
From version 3, the `aria-hidden` attribute will be set to `true` when instantiating the dialog, and the `.shown` property to `false`. When wanting to have a dialog open by default (please don’t), simply run `.show()` directly after instantiation.
This is a nice little addition allowing you to chain all method calls.
dialog.on('show', doSomething).show()
As stated before, this version also comes with brand new code that I took time to heavily comment, as well as a brand new test suite (that should hopefully be much more thorough).
That’s it, and that’s already quite a lot if you want my opinion! I’d be glad to have some feedback about this if you happen to use a11y-dialog. Also, if you find any bug, please kindly report them on GitHub.
Thanks to Mike Smart and Loïc Giraudel for their insightful help.
]]>But let’s try to focus on the bright side, shall we? Last year, I did a little recap of the things I did that are worth mentioning. With emojis. Because everything is better with emojis. Here we go again!
♿️ February 11th. I released a11y-dialog, a lightweight and flexible accessible dialog script without any dependency. To this day, it remains one of my best open source projects in terms of usefulness. It was also the project that drove me onto the accessibility path.
📝 March 1st. I finally published version 1.3 of Sass Guidelines, six months to the day after the previous version. Version 1.4 is likely to be published some time after Sass 4 sees the light. Since then, many contributors have helped make Sass Guidelines what they are today: a huge set of guidelines translated into 12 languages.
♿️ March 7th. Not even a month after a11y-dialog, I released a11y-toggle, a small script for content toggles. Slightly less popular than the first one, but still worth it!
📘 April 4th. I released Jump Start Sass, my second book (in English this time). I had the luck to co-author it with Miriam Suzanne and am super happy with the result. I believe it’s a short yet insightful piece to learn how to use Sass efficiently.
🏆 May 13th. Google made me a Google Expert in frontend development after a few weeks of going through the process. It’s quite an honor being amongst such brilliant minds!
🎤 May 20th. I gave a talk named “Local styling with CSS Modules” in Thessaloniki (Greece) at DevIt conference. What an amazing conference it was my friends, can’t wait to get back there next year!
📦 May 29th. I silently shipped a small contribution to open source with Jekyll Boilerplate, an improved fork of the initial Jekyll project to get started more quickly. If you work with Jekyll quite a bit, be sure to have a look!
😰 Summer. I started feeling overwhelmed with my work in the web industry. I went through (and still kind of am in) a phase where I didn’t want to write technical articles anymore. Where I didn’t want to spend so much time doing open source. Where I just wanted to do something else.
💶 September 1st. After a year and a half at Edenspiekermann, I decided to leave the agency world for a while to join the kind folks at N26 in order to push mobile banking forward.
🎤 September 15th. I gave my talk about CSS Modules again in Bologna (Italy) at From The Front this time. Lovely conference once again. Too bad it was the last edition.
🇧🇪 October 1st. I went back to Brussels (Belgium) for a weekend. All the beers!
🇩🇪 December 8th. I left Berlin for another German city for the first time since I moved here last year. I visited Munich for a couple of days. What a lovely city!
Last thing I didn’t mention—because it spread from March until now—is that I have been coding some Node.js with my mom on a personal project of hers. She had an initial PHP version last year which I rewrote entirely in Node.js to make it more flexible and structured. She has taken it over and maintained it (mostly) by herself since then. HOW FUCKING COOL IS THAT? ✨
Anyway, that’s still a few things for this year, but that’s without counting all the downs and hard times. All in all, 2016 was pretty crappy. Let’s hope 2017 gets better.
]]>This article is a translation of Cache-cache CSS by accessibility expert Gaël Poupard. All credit to him.
Or how to visually hide some text while keeping it accessible.
And even if I find this stupid—hiding text from some users but not others seems inherently wrong from an accessibility standpoint to me—it’s a recurring need.
There are many ways of doing this, which I won’t detail here. For the past few years, I’ve been using the technique from Thierry Koblentz described on his blog. It’s by far the most comprehensive, and—to my knowledge—the only one supporting RTL text orientation.
Unfortunately it’s not without issue anymore.
The “magic trick” of this solution relies on the `clip` property. It’s simple to understand and very efficient. Only downside: `clip` has been deprecated by the CSS Masking Level 1 module.
No worries. This technique being quite old now, it is no surprise it’s getting obsolete. The new specification recommends using `clip-path` to replace `clip`. Which is not ideal, because `clip-path` support is still so-so. Thus we have to keep `clip` and add `clip-path` as a progressive enhancement.
That being said, the syntax is different. After a bit of research, Yvain Liechti suggested this short version to get the expected result:
clip-path: inset(50%);
Problem solved.
J. Renée Beach warned about the `width: 1px` declaration having side effects on text rendering, and therefore on its vocalisation by screen readers.
The suggested solution is both simple and logical: preventing the text from wrapping so that spaces between words are preserved.
Only one declaration does that:
white-space: nowrap;
Problem solved again.
Here is the final version I came up with:
.sr-only {
border: 0 !important;
clip: rect(1px, 1px, 1px, 1px) !important;
-webkit-clip-path: inset(50%) !important;
clip-path: inset(50%) !important;
height: 1px !important;
overflow: hidden !important;
margin: -1px !important;
padding: 0 !important;
position: absolute !important;
width: 1px !important;
white-space: nowrap !important;
}
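For context, here is a typical (hypothetical) usage of such a class: visually hiding a text alternative while keeping it available to assistive technologies.

```html
<!-- Hypothetical usage example: the symbol is decorative, while the
     visually hidden text provides an accessible name. -->
<button type="button">
  <span aria-hidden="true">×</span>
  <span class="sr-only">Close dialog</span>
</button>
```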
This technique should only be used to mask text. In other words, there shouldn’t be any focusable element inside the hidden element. This could lead to annoying behaviours, like scrolling to an invisible element.
That being said, you may want to hide a focusable element itself; common candidates are skip links (a WCAG 2.0 technique). Most of the time we hide them until they get the focus.
Bootstrap and HTML5 Boilerplate have a pretty good solution for this: another class meant to reset these properties.
Here is the adapted version:
.sr-only-focusable:focus,
.sr-only-focusable:active {
clip: auto !important;
-webkit-clip-path: none !important;
clip-path: none !important;
height: auto !important;
overflow: visible !important;
width: auto !important;
white-space: normal !important;
}
You can find it on CodePen or in this Gist. What do you think?
Seeking some testers to make sure I didn’t cause any regression, Johan Ramon hit a strange bug with VoiceOver. Digging further with Sylvain Pigeard, we found out that `position: static` is buggy on iOS 10 + VoiceOver when `.sr-only-focusable` is focused.
As we thought we had discovered a real bug, I headed over to Bootstrap to open an issue. But it turned out an issue was already open, involving TalkBack too. I shared our results to contribute, then Patrick H. Lauke did an awesome (and still in progress) job of determining and describing the problem precisely. As a result, he filed many bugs:
So. In fact, skip links don’t work with screen readers on touch devices at the time of writing. Nice.
Steve Faulkner from the Paciello Group asked the Google Webmaster Central Help Forum directly whether extra context for vision-impaired users has a negative effect on search ranking.
Short answer: nope. However, visually hidden content is considered secondary, in order to prevent abuse. And that’s great.
Multiple overflow-related issues were noticed, particularly on Chrome, when hidden elements live within a parent with `overflow: auto`. The problem was addressed in Orange’s Boosted framework by adding `margin: -1px` to the ruleset:
margin: -1px;
]]>Recently enough, a project named You Might Not Need JS saw the light of day. I have mixed opinions about it, and rather than writing a series of context-less tweets, I thought the sensible thing to do would be to write a couple of lines here.
Needless to say, this is obviously not meant as an offense to the project’s author, especially since I believe they (mostly) did a great job. More on that later.
The project which has inspired the aforementioned one is You Might Not Need jQuery, in which its author outlined ways to use plain JavaScript rather than the jQuery library for simple tasks. It was quite a hit when it came out.
What I liked about this attempt is that it showed the world that JavaScript had come a long way and was not as hard to author as when jQuery was first invented. It also had the benefit of introducing new browser APIs (`.querySelectorAll`, `.classList`, `.matches`, `.bind`), which is obviously a Good Thing™.
Coming back to my initial point: I am all for teaching people not to abuse JavaScript and not to use it when it is not needed. No need to convince me that progressive enhancement is the way to go, and that relying on JavaScript for critical features is to be avoided. For that, I think Una (the project’s author) did a fantastic job.
However, I don’t believe replacing JavaScript with CSS hacks is any better. People, JavaScript is not a problem. I repeat it, because it doesn’t seem that obvious these days: JavaScript is not a problem. It has been invented for a reason. Replacing it for the sake of replacing it is not only useless, it’s also quite harmful.
CSS is not meant to handle logic and states. It has some simple mechanisms to ease styling based on states (pseudo-classes mostly), but it is not meant to control states. JavaScript is.
At the end of the day, it boils down to knowing your browser. There are some excellent examples in this project, and almost all of them are about replacing JavaScript with native HTML. A good one is the color picker:
<label for="color-picker">Select a color</label>
<input type="color" id="color-picker" />
Fantastic! No need for JavaScript if the browser supports the `color` input type. Maybe only load a JS-powered color picker if it doesn’t.
Another good example is form validation, with all the fancy HTML attributes allowing it (`required`, `pattern`, etc.). Indeed, no need for JavaScript client-side validation if the browser can do the heavy lifting for us.
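As a sketch of that “only load it if needed” idea, support for a given input type can be feature-detected in a few lines. This helper is illustrative, not taken from the project:

```javascript
// Illustrative helper: detect support for a given <input> type.
// Browsers fall back to type="text" for types they do not support,
// so reading the property back tells us whether the type "stuck".
function supportsInputType(type) {
  const input = document.createElement('input')
  input.setAttribute('type', type)
  return input.type === type
}

// Usage sketch: only bring in a JS-powered fallback when needed.
// `loadColorPickerFallback` is a hypothetical loader, not a real API.
// if (!supportsInputType('color')) loadColorPickerFallback()
```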
I really appreciate this project promoting these new browser features in favor of heavy JavaScript modules, the same way You Might Not Need jQuery featured new DOM APIs instead of jQuery-dependent scripts. But I don’t think all examples are correctly picked, which brings me to my last point.
The problem with blindly banishing JavaScript from interactive components is that it often means making them inaccessible. It is a popular belief that JavaScript is an enemy of accessibility; that’s a fallacy.
While it is strongly encouraged to make websites work without JavaScript (because it can fail to load or execute and be blocked or disabled), it does not mean JavaScript should be avoided at all cost. It means it shouldn’t be used in a critical way.
If there is one thing I learnt while building a11y-dialog and a11y-toggle, it’s that JavaScript is necessary for interactive modules to be fully accessible for people using assistive technologies (such as a screen reader for instance).
A dialog element is not going to be accessible with CSS only. The `aria-hidden` attribute needs to be toggled, the focus needs to be trapped, the escape key needs to close the dialog, and I could go on.
Maybe instead of trying to reproduce the exact same module without JavaScript by using CSS hacks, we could display the content in a way that is suited to a no-JS environment. Nothing states that JS and no-JS environments should behave the same. If a module cannot fully exist without JavaScript, don’t use it in a no-JS environment; find something else.
Be pragmatic about your approach. If something can be done in HTML exclusively, it probably means it should be done in HTML. If the lack of browser support is likely to be an issue, fix it with JavaScript.
If something needs interactivity and state handling, it is likely to be a job for JavaScript, not CSS. A CSS hack is not any better than a clean JavaScript solution.
If you want to make it work without JavaScript: go simple. Accessible content powered by clean code is better than non-accessible content made with hacks.
With that said, happy coding. 💖
]]>Still, I realised that I was doing the same thing over and over again for every new Jekyll project. It was way past time to create myself a tiny boilerplate. Which I did. Say hi to jekyll-boilerplate.
The goal behind this project was to speed up the beginning of Jekyll projects. At the same time, I did not want it to be too opinionated, to avoid finding myself in the exact same situation at the other end of the spectrum; and also so that other people could use this starter pack without having to change much.
I feel like I have done a pretty decent job covering what jekyll-boilerplate does in the project’s README, so feel free to have a look at it to know what’s up. In case you’re lazy, here’s a sum up:

- Assets live in a single `assets` folder rather than being spread in their individual folders at the root of the project.
- A feed (`jekyll-feed`) and a sitemap (`jekyll-sitemap`) are included; both running in safe mode to stay compatible with GitHub Pages.
- Accessibility basics are covered (a `main` element, presence of a `lang` attribute…).

As of today, this is mostly a personal helper so I did not distribute jekyll-boilerplate in any way; however, you can definitely use it by cloning the repository and wiping out the git folder.
git clone git@github.com:KittyGiraudel/jekyll-boilerplate <your_project_name>
cd <your_project_name>
rm -rf .git
You tell me. Feel free to open an issue on the repository if you have an idea or highly disagree on a choice made in the boilerplate. I’ll be happy to discuss it!
]]>I’ve never been quite happy with the design of this blog. Let’s face it: I am no designer, and coming up with a fancy layout is not really my strong suit.
So I was thinking… hey, why not try something different for once?
Markdown is one of my favourite things in this industry. I use it so much. For articles. For books. For sites. For mails. For personal content. It is such an amazing text format, both simple and obvious.
Last year, I wrote about how I use Sublime Text as a writing environment. And now, I wanted to move my Sublime Text design into the browser. Here we are.
This site runs on Jekyll. Almost everything that is not structural (such as the sidebar, the footer, etc.) is written in Markdown. Jekyll compiles everything to static HTML. Then I use CSS to style HTML as raw Markdown.
This is not a new concept. A couple of libraries style HTML like Markdown, such as ReMarkdown or Markdown.css, and I myself made a pretty detailed CodePen about this last year.
It is surprisingly easy to do. Basically, pseudo-elements are used to display characters at specific locations, such as `#` before headings, or `**` around `<strong>` elements.
strong::before,
strong::after {
content: '**';
}
A monospace typeface (here “Source Code Pro”) is required to make the whole thing look even better, and a special care must be given to spacing and line height in order to align everything on a grid.
While most of the design is surprisingly easy to implement, there are a few things that turned out to be slightly more tricky. Here they are, and how I solved them.
To make it look like Markdown, the `href` attribute of a link is displayed with a pseudo-element, like this:
a::before {
content: '[';
}
a::after {
content: '](' attr(href) ')';
}
The problem is that some URLs are very long. Veeeery long. Sometimes, it resulted in odd and quite confusing line breaks. I managed to solve it (or rather make it less painful) by forcing line breaks anywhere in a URL when reaching the end of a line, thanks to `word-break: break-all`.
a::after {
content: '](' attr(href) ')';
word-wrap: break-word;
overflow-wrap: break-word;
word-break: break-all;
}
This declaration is usually to be avoided because it does not respect language-specific line breaking rules and arbitrarily breaks a word when reaching the end of a line. In this scenario though, it is exactly what we want, and it does not cause any readability issue because it’s limited to the link pseudo-element only.
When I launched the redesign, there were no line numbers, and I could not help thinking they were really missing. I was not sure how to implement them best, and I must say my current solution is quite fragile.
Right now, the main container has an absolutely positioned pseudo-element displaying numbers through the `content` property. Line breaks between numbers are made with `\A` and the `white-space` property. It looks like this (shortened for sanity):
.main {
overflow: hidden;
}
.main::before {
position: absolute;
left: 0;
top: 0;
bottom: 0;
white-space: pre;
text-align: right;
padding: 0 0.5em;
content: '1\A 2\A 3\A 4\A 5\A 6\A 7\A 8\A 9\A 10\A 11\A 12…';
}
Numbers go up to about 700, a magic number that I estimated would cover all of my pages in terms of length, even the very long ones. I can see 3 problems with this approach though:

- The `content` property itself is so long that it weighs 4.31Kb, which is almost a third of the stylesheet.

I tried playing with CSS counters but I could not come up with something working as nicely. If someone has a solution to make this more elegant, please tell! My bet is that we can probably remove the `\A` from the `content` property by relying on natural line breaking. That would shorten the whole thing a hell lot already.
I don’t over-use images on this blog, but a few articles use some. The problem with images is that I don’t control their height. Which means they kind of break the line-based flow.
I managed to solve this with a little bit of JavaScript, scaling down an image just enough to fit on a whole number of lines, but that’s not super nice. Unfortunately, I am not sure there is a good solution for this.
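The arithmetic behind that JavaScript fix is straightforward. A hypothetical helper (the names are mine, not the actual code from this site) could compute the target height like so:

```javascript
// Illustrative sketch: compute the largest height that fits on a whole
// number of lines, so a scaled image doesn't break the line-based grid.
function snappedHeight(naturalHeight, lineHeight) {
  return Math.floor(naturalHeight / lineHeight) * lineHeight
}

// Applying it to an image element might look like:
// img.style.height = `${snappedHeight(img.clientHeight, 24)}px`
// img.style.width = 'auto' // preserve the aspect ratio
```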
I don’t know yet if I am going to keep this design for a long time, but right now I am super happy with it. It looks very different to what I used to have, or to any other blog on the internet, for that matters. And reactions on Twitter were surprisingly very positive, so thank you for the support y’all!
If you can think of anything to improve the design, or to make it look even more like Sublime Text while still providing value, please tell me! In the meantime, happy coding. :)
]]>The project has long lived as a self-sufficient GitHub repository (gaining a bit of traction and a lot of stars in the process), but I wanted to give users a nicer way to browse it. Hence a small Jekyll website.
The thing is, I did not want the GitHub repository to become unusable. Basically, I wanted everything to work both on GitHub and on jargon.js.org. Tricky! I eventually found a way, not without a struggle though, so here are a few lines to explain the process.
SJSJ is community-driven. It means that while I take care of the repository and the technical setup, I do not write entries (anymore). Generous contributors do that. They submit a pull-request to add a new Markdown file to the repository, and voilà. I wanted this process to remain just as simple.
The main problem is that when contributors want to link to another entry from their content, they do something like this:
Redux is an alternative to [Flux](/glossary/FLUX.md) and used a lot together with [React](/glossary/REACT.md), but you can use it with any other view library.
When clicking such a link on GitHub, it heads to the `FLUX.md` file located in the `glossary/` folder, for instance. Very good. Except that I needed these links to work the same way on the Jekyll website.
One source of content. Two ways of browsing it. Two URL structures. A lot of troubles.
I cannot change the way GitHub works (or can I…?), so if I want the entries to be consumable and linkable from both GitHub and Jekyll, I need to dig on the Jekyll side.
It turns out Jekyll 3 has lovely support for collections. And the nice thing about collections is that you can output pages, iterate over them and even specify the permalink you want. Neat.
I created a `glossary` collection, containing all the Markdown files, outputting pages at `/glossary/<path>/`:
collections:
glossary:
output: true
permalink: /glossary/:path/
A few problems there already. For starters, a collection folder has to be prefixed with an underscore (`_`) in Jekyll, so the files would actually live in `/_glossary/` but be served over `/glossary/`. Secondly, in-content links point to `/glossary/<path>.md`, not `/glossary/<path>/`, so they were broken. Bummer. There had to be a way.
The first issue is easily fixed by tweaking the permalink configuration to serve files over `/_glossary/`, to have a 1:1 mapping between the folder structure and the URL routing:
collections:
glossary:
output: true
permalink: /_glossary/:path/
I thought the second problem would be harder to fix, but it turns out I could simply serve entries with a URL ending in `.md`. I believe under the hood all this is just URL rewriting, so it was not an issue at all.
collections:
glossary:
output: true
permalink: /_glossary/:path.md
Tada! Files are located at `/_glossary/<path>.md` and served over `/_glossary/<path>.md`. 1:1 mapping; the site is browsable on both GitHub and Jekyll seamlessly.
Admittedly, wanting content to work on both GitHub and a custom website is kind of an odd use case, but I think SJSJ is a good candidate for it.
Thanks to Jekyll’s friendly handling of permalinks and a bit of trial and error, it turned out to be quite simple to do.
]]>The workshop lasted 2 days, with a solid 6 hours a day.
There were 9 participants, coming from pretty much all departments (except development, quite obviously): accounting, finance, design, product management, etc. Six of them were women. The participants’ ages ranged from 20-something to 40+.
Most of them had little to no clue what HTML and CSS were about, and I assume some (if not most) of them never really opened a development-oriented text editor. After all, why would they?
Ironically enough, when it comes to teaching HTML and CSS, I don’t like to work on a website. I believe a website is a product that is already too complex to begin with. Not only is it hard to build from a technical point-of-view, but it also involves a lot of design and user experience to be done right.
Also, we are so used to browsing incredible websites on a daily basis that I believe trying to build one from scratch when knowing nothing about HTML and CSS (let alone JavaScript) can be extremely frustrating. I don’t want people to start with frustration. They will have a hard enough time overcoming the baby steps that are necessary to write HTML and CSS.
When teaching the basics of frontend development, I like to work on cooking recipes. A cooking recipe is usually a very simple and straight-forward document that can make sense on its own, even when undesigned. A cooking recipe is enough to learn about HTML without feeling overwhelmed, and more than enough to experiment with CSS for literally hours.
So before the workshop, I asked every participant to prepare a recipe in text format: a title, a few pieces of metadata such as the preparation time or the number of portions, a list of ingredients, a list of steps to reproduce, and at least one image.
Over the course of these 2 days, every participant was working on their own recipe, with their own content, and their own design, then I gathered them all into a small website that we named “ESPI Cookbook”.
I kicked off the workshop with a 15-minute introduction on what the whole thing was about: how a website works in a few simple words, what frontend development is (and de facto what backend is as well), what the 2 essential languages that compose it are (no, none of them is JavaScript) and what we wanted to build in those 2 days.
After that, I asked the participants to create a folder for their project, containing only a single `index.html` file in which they had to paste their recipe content, as plain text. Time to start.
At first, I thought I could start with the doctype, then the `<html>` tag, then the `<head>` and all it contains, then the `<body>` tag, then the content, and so on. And then I realised it was already way too complex to start with. And unnecessary.
So I started by introducing what HTML is meant for and how to write it. Opening a tag, putting some content, closing a tag. Easy peasy. From there, they could put their title in a `<h1>`, their sub-titles in `<h2>` and their content in `<p>`. Two interesting things there:
- The first heading would be `h1`, the second `h2`, the third `h3` and so on. Maybe I just went a bit too fast on what the number in the tags meant.
- Wrapping their paragraphs in `<p>` did not seem correct to any of them.

The next hour (and a half or so) was about marking up all the content from the recipe. Still no mention of the `<body>` tag, let alone anything outside of it. We kept moving forward with HTML while remaining heavily focused on our content. It took some participants a bit of time to understand where to close tags, but eventually everyone got there.
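To give an idea, the kind of markup a participant might have ended up with at this stage could look like this (the content is made up for illustration):

```html
<!-- Hypothetical example of a participant's recipe at this stage -->
<h1>Pancakes</h1>
<p>Preparation time: 20 minutes. 4 portions.</p>
<h2>Ingredients</h2>
<p>Flour, milk, eggs, a pinch of salt.</p>
<h2>Steps</h2>
<p>Mix everything together and cook in a hot pan.</p>
```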
At this stage, I invited them to open the file in Chrome (because I knew Chrome was adding all the things we did not add manually) so they could see what was going on once in the browser.
We encountered the first encoding issues (since we did not add a charset meta) with German and Japanese characters. We solved them by adding the `<html>` element, the `<body>` element, and a `<head>` element containing only the charset meta tag.
<html lang="en">
<head>
<meta charset="utf-8" />
</head>
<body>
Content…
</body>
</html>
I took this as an opportunity to introduce HTML attributes, such as `lang` on the `<html>` element. Retrospectively, I am not sure it was good timing for that. Maybe it was unnecessary complexity at this stage.
This marked the end of the first half-day and the HTML part.
I did not want to start right away with the `<link>` tag and how to connect a stylesheet to the document, so I started the second half-day with a practical example to introduce CSS.
h1 {
color: pink;
}
Simple enough, but for someone with no clue how CSS works, there are already quite a few things going on there: a selector, a rule, a property, a value, a declaration, a motherfucking semi-colon… After a few explanations on this snippet, we actually created a stylesheet and used the `<link>` tag to connect it to the HTML document.
I’ll be honest and admit at this point I found myself a bit cornered. The thing is, there is usually only one good way to use HTML (especially on something that simple). But regarding CSS, and depending on what the result should be, there are dozens or hundreds of ways to write a stylesheet. And since they had free rein on the design, well… I had no idea how to move forward.
Luckily for me, they all started applying styles to their pages. First the main title, then the sub-titles, then the paragraphs, the lists, and so on. Since I did not want to introduce hundreds of CSS properties, I suggested they check the CSS cheatsheet from OverAPI.com. I accompanied them one by one in what they wanted to do. I was actually surprised at how fast they managed to get this and style their documents.
It was not without a few mistakes though. Here are a few things I noticed (one of them involving the `#` sign).

Eventually, all participants managed to have their recipe styled pretty much how they wanted it. I even went further than expected with some of them, covering a few extra topics.
Beginners do not really care about syntax consistency. Sometimes they will add a space before the opening brace, sometimes not. Sometimes they will put the opening brace on the same line as the selector, sometimes on its own line. Sometimes there are spaces before or after the colon of a declaration, sometimes not. Sometimes there are blank lines between rules or even between declarations, sometimes not. It does not seem to matter.
I have to say I found this very curious as being consistent seems like it would make things much easier for them to read their own code and update it. Don’t get me wrong, I am not implying everybody should lint their code; just that being consistent with whatever rules one feels comfortable with makes things simpler.
Participants did not seem bothered by having to repeat styles several times. Since I did not explicitly explain how the cascade works, some of them copied their `font-family` declaration into all selectors without finding this particularly annoying. Same for colors, font sizing and such. They all seemed to assume that it was perfectly normal to have to repeat all this, and did not really consider a way to make it simpler.
`<div>` and `<span>` were harder to grasp than I thought they would be. I introduced them as “semantically empty (inline / block) containers”, but that was not an easy concept for everybody to comprehend. We eventually got there when we started naming elements with the `class` attribute, but the necessity of “dumb” containers was not as straightforward as I thought it would be.
All participants seemed happy with what they came up with in just a few hours, so there is that. On my side, I am super happy with the workshop. It was such a great teaching experience that I cannot wait to do it again.
If you have any tip or comment, please be sure to share in the comment section. In the meantime, here are a few screenshots of the work done by some participants:
Quite nice, for just a few hours of playing with HTML and CSS, don’t you think? :)
]]>Before I get started though, don’t take this as the bible or anything. It’s just how I made my way through Twitter after a few years (almost 4 to the day as of writing this), and how I can still enjoy using it today, despite all the problems it presents. Might not suit everybody, or you could also have a different opinion, that’s totally fine.
How do you make something out of your Twitter account as a worker of the IT industry? And by this, I mean how do you grow a Twitter audience, if there is even such a thing.
I’ll start by stating the obvious, but the first thing to enjoy using Twitter is to actually use Twitter. Follow people, read what they say, react to what they say, and bring your own content. You know, nothing ground-breaking there.
If you don’t know where to start as a frontend developer, this Twitter list is full of good people. If you want more than just people, have a look at Frontend Rescue.
Try to find your limit: when you can’t really browse through your timeline anymore because there is too much noise, try unfollowing a few accounts to see if it gets better. My personal limit is 500 followed accounts. I can’t really keep up when it gets higher.
The main problem is that Twitter is fundamentally broken for newcomers. It is a social network where nobody can initially read you. Let that sink in for a second. When you join Twitter, you can read everybody’s words, but nobody can read yours.
Joining Twitter is like arriving at a party and talking alone on your side of the room, hoping that someone will care enough to join you. Trust me, it sucks.
Unfortunately, there is no good solution for this. The only one I know is to keep producing meaningful content hoping that people will notice, retweet your content, follow your account and so on. From my own experience, the first hundred followers is the worst part. If you can get past this stage, slowly but surely it will get better.
I know that some people have reached a very decent follower count by following (literally) thousands of accounts, hoping a few of them would follow back. It does work; however, you end up with an audience that has little to no interest in what you have to say, because they followed back almost mechanically, not because of your content. I’d advise against doing that.
Tweet often. Produce meaningful content. You know the saying, “if you build it, they will come.”
Tweeting links, articles, quotes and such is a very fine way to build your audience, to tell the world you exist and to share insightful content, but it is also fine to give your opinion. Especially on topics you feel confident enough to have one about.
I personally do not follow people only for what they share, but also (and mostly) for what they think and say. Even if I don’t always agree, I like reading people’s opinions about our industry, our work, my work and whatever else.
I suppose it is important to find the right balance between giving opinions and sharing resources. Some people will like one more than the other. Bringing a bit of everything to the table helps gather a larger audience.
I have really no problem with people tweeting about their life, what they enjoy doing outside of work and anything not web industry, but I know for a fact that a lot of designers, developers and workers from our field on Twitter do not enjoy it.
If you want your Twitter account to grow, I would advise not going too much off-topic. It’s fine once in a while, but I guess on Twitter we all follow someone for a reason. For instance, I follow Brad Frost because he tweets insightful links and content about Responsive Web Design. That said, I am perfectly fine with him talking about anything else from time to time. But when it gets to be too much, as with Christian Heilmann, I lose interest. That’s also the nice thing about Twitter: we can kind of pick what/who we want to read.
But again, some people will be completely fine with this, and some people won’t like it at all. You can’t please everybody, so it really depends on how you envision your Twitter account.
Ironically, I believe this is the kind of thing that matters less and less when your account is getting more and more attention. However when you get started and nobody knows you, it is worth spending a bit of time on your profile to show who you are, what you do and what you are using Twitter for.
Put up a picture, a background image, and write a short bio. I hate writing bios so I am definitely not an example here, but it can be short and sweet. I personally tend to dislike bios that are a super serious list of all the achievements and titles of a person. I don’t really care. I’d rather smile at a joke, or simply see what the person is interested in.
I also noticed that company / product / project accounts usually perform worse than user accounts. Probably because they are less personal, but that’s still an interesting thing to acknowledge.
You can also pin a tweet to the top of your timeline. It has no impact on users browsing Twitter through a third-party client such as Tweetdeck. But those on the .com will see your tweet right away when visiting your profile.
The feature being fairly new, I am not entirely sure what is best to pin up there. I chose a tweet that had some impact and could be useful for beginner developers. But a fun tweet could also work, I guess. Think about what you’d want people to think when checking your page.
Last piece of advice would be to just enjoy it. It should not be a pain for you to tweet or to browse your timeline. Twitter is also so specific that I’d say it’s not for everybody. Not in the way that some people should not use it; just that I understand how some people could dislike it.
Find what works for you on both sides of Twitter: reading, and tweeting, then stick to it. Patience is what works on this social network. It takes time, but that’s part of the journey.
Enjoy it. The more you do, the more you will. :)
]]>But these brighter colors and shiny buttons are just a mask for the pile of inconsistencies and poor user experience lying underneath. The bad practices and anti-patterns you use in your service makes me sad and wonder how you can handle that much money and still be that wrong with your users.
Here are just a few things that are terrible, from the top of my head.
Online security is a tough topic, I think we can all agree on this. And password security is a serious business. Especially when you hold people’s money, credit card information and more broadly speaking so much sensitive data.
Then how come you are so terrible (and I strongly believe this is an understatement) at protecting all this?
We have been advocating for years now that the strength of a password is a function of length and unpredictability, not character set complexity. Still, you stick to these silly password requirements such as 8 to 20 characters including at least one number or symbol (like !@#$%^) but no space (amongst others).
Let me try to clear this up a bit. For starters, there is absolutely no good reason to enforce a character limit on a password. Not one. Length is the primary criterion of a strong password. An 8-character password containing only Latin characters is a matter of hours to a few days to brute-force on a decent machine. A 12-character one could take years, and one of 20+ characters would take decades. Still, none of that is a valid reason to limit the number of characters in a password to 20.
You surely know the saying: “hard to guess, easy to remember.” It turns out, we humans are very good at remembering sentences. Because they make sense. By preventing a password from being longer than 20 characters and most importantly from containing any space, you basically prevent people from using sentences. And when you prevent people from using passphrases, you make them choose something small and simple enough to be remembered, which is a dull and ridiculously easy-to-crack password.
On top of that, you also make the process of choosing / changing a password hard and painful. Finding a password is quite annoying in itself. How do you think people feel when they have to come up with 2, 3 or 4 passwords in a row because none of them fit your stupid “strength” criteria?
You are absolutely not helping your users by doing this. Do you want to make your users’ password secure? Ask for 12+ characters without any character restriction. Then on your side, make sure it’s not a repeated string, number sequence or a common word that’s in all brute-forcing dictionaries.
That’s it. Regarding password entropy, this is all where it’s at. You make sure your users pick hard-to-crack passwords without standing in the way of their brain. Users are happy to be able to use whatever sentence or long word they want. Win-win.
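To make that policy concrete, here is a minimal sketch in JavaScript of what such a server-side check could look like. Everything here is illustrative: `isAcceptablePassword` and the tiny `commonWords` list are hypothetical names, and a real implementation would use a proper dictionary of common and leaked passwords.

```javascript
// Illustrative only: length is the single hard requirement, with no
// character-class rules and spaces allowed, matching the advice above.
// `commonWords` is a tiny placeholder for a real brute-forcing dictionary.
const commonWords = new Set(['password', 'qwerty', 'letmein']);

function isSequence(digits) {
  // True when every digit increases or decreases by the same step,
  // e.g. '123456' or '97531'.
  const steps = [...digits].slice(1).map((d, i) => Number(d) - Number(digits[i]));
  return steps.length > 0 && steps.every((s) => s === steps[0]);
}

function isAcceptablePassword(password) {
  if (password.length < 12) return false;          // too short
  if (/^(.+?)\1+$/.test(password)) return false;   // repeated string ('abcabcabcabc')
  if (/^\d+$/.test(password) && isSequence(password)) return false; // digit sequence
  if (commonWords.has(password.toLowerCase())) return false;        // dictionary word
  return true;
}

isAcceptablePassword('correct horse battery staple'); // true: long passphrase, spaces and all
isAcceptablePassword('Tr0ub4dor&3');                  // false: complex-looking but only 11 characters
```

Note how a passphrase with spaces sails through while a short “complex” password fails, which is exactly the behavior that composition rules get backwards.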
Password is not the only thing that matters when it comes to users’ data security. There is also Two Factor Authentication (2FA). Broadly speaking, 2-factor authentication is a way to protect an account by asking for a regular password plus a code received by SMS, mail, or authentication application.
Thanks to the insightful twofactorauth.org, we can see that PayPal does support it —which is great— but only in 6 countries (USA, Canada, UK, Germany, Austria and Australia). 6 countries out of roughly 200. It’s 3%. Meanwhile, your country picker page displays proudly “We are available in 203 markets and 26 currencies.”
For a company that pernickety about security and data privacy, I find your lack of concern about providing two-factor authentication across the board very worrying. I’d love nothing more than being able to fully secure my money with two-factor authentication. Especially given how weak you force my password to be.
Yesterday night, I wanted to check the status of my PayPal account. After 10 minutes trying to remember my password in vain because of the aforementioned restrictions, I asked for a new one. And when I finally signed in, I got faced with a page asking to verify my account. Before anything else, let me tell you that I believe it is a good idea. I attempted to sign into my account 5 times without success, then changed my password. Making sure that I am the one messing with my own account is a good idea. I don’t have any problem with that.
The problem I had is that you wanted to send me a security code by text to my phone. Unfortunately, the registered phone number is my old French number which is not mine anymore. I’ll give you some credit and concede that I should have changed my number before. Fair enough. However you did not provide another way to verify my identity even though you have security questions and answers, as well as my email address. So at this point of the night, I was literally stuck and unable to access my account even though I already signed in successfully.
Now I guess the funny part is how I managed to solve this. I signed in with my phone, and deleted the phone number from my account. Then went back on my computer, signed out, signed in again and could finally access my dashboard. Now, how is it a good user experience? I am asking you.
Last but not least, I would like to tell you about phone numbers. For starters, phone numbers are not actual numbers. I know the name is misleading, but you cannot reasonably think that a phone number is made exclusively of digits. There are also spaces, plus signs, parentheses, and more. So [0-9]*
is not a correct pattern for them.
Another problem of storing phone numbers as actual numbers is that leading zeros are getting removed. This is an issue. 0102030405 is a valid phone number. 102030405 is not. The leading zero matters. Again, a phone number is not an actual number. 01 02 03 04 05 is a phone number. 102030405 is not.
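As a sketch of what a saner approach could look like (the pattern and the variable name are mine, not PayPal’s): keep the value as a string and validate it loosely, leaving strict parsing to a dedicated library such as libphonenumber.

```javascript
// Store phone numbers as strings, never as integers: converting to a
// number silently drops the leading zero that makes the number valid.
// Deliberately permissive: optional '+', then digits mixed with spaces,
// dots, dashes and parentheses.
const PHONE_PATTERN = /^\+?[\d\s().-]{5,20}$/;

PHONE_PATTERN.test('01 02 03 04 05');    // true: French format, spaces preserved
PHONE_PATTERN.test('+49 (0)30 1234567'); // true: German number with country code
Number('0102030405');                    // 102030405, the leading zero is gone
```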
Now the real reason why I never changed my registered phone number to my new one is because you did not let me. Yes, it starts with 01 and not 06. Because this is a German line. Why do you even care about what kind of number this is as long as you can text me? What’s happening right now is that I literally cannot map a number to my PayPal account because you want a French mobile phone number, and the only one I got is German. How is this okay?
My dear PayPal, I love your services but your site and the way you handle my data, my privacy and my experience as a user is just terrible. I wish you’d do better.
Oh, and little bonus for the end. When I have to open the Developer Tools to remove a container of yours in order to click a button, you know you have done it wrong.
Sincerely yours.
]]>Now, this article is yet another “here is what I’ve done in 2015”. Some would argue it does not belong on this blog (or any other similar blog), but I feel like it’s important to look back and have a glance at what has been and not been done so we can focus more closely on realistic goals for the year to come.
All in all, 2015 was a very fine year, not without a few downs as well though. Here is a (emoji-powered) timeline.
🌍 January 7th. I released Sass Guidelines, one of my biggest personal projects (currently in version 1.2). Over 15 000 opinionated words about Sass (roughly 50 pages), which have been translated into over 12 languages so far. If this project has taught me anything, it is that internationalisation is hard. But interesting. But hard.
💥 February 5th. I, along with the SassDoc team, released the second major version of SassDoc, my big project of 2014. It was quite a large launch for us as it involved a complete rewrite, a redesign, a new site, a logo, a lot of tests, and so on. Pretty happy with what we have so far, and for good reason: the Zurb Foundation framework is using SassDoc to document all their Sass assets!
📘 February 27th. I released my very first book, CSS 3 Pratique du Design Web. It took me about half of 2014 to write, and I am very proud to see it live. Holding your very first book in your hands is a priceless feeling and hopefully it is something I will renew!
🇩🇪 March 15th. I moved from Grenoble (France) to Berlin (Germany), despite not knowing a single word of German. I have learnt a few words since then but it’s still far from ideal. In any case, I learnt to love the city and its open-minded people. Planning on staying there for a while.
💻 April 1st. I started as a frontend developer at Edenspiekermann in Berlin. Needless to say it has been a very exciting move, and working with so many inspiring people on a daily basis makes me very happy.
🇳🇱 April 2–5th. I visited Amsterdam for the first time (yup, took some holidays directly after my start at ESPI ¯\_(ツ)_/¯). Lovely city, it goes without saying, although super crowded at that time of the year. I am likely to go back there for a talk in the next months.
✒ May 12th. I signed a contract with SitePoint to co-author Jump Start Sass with Miriam Suzanne (author of Susy grid engine), an introduction book to Sass which will be released in February this year (if all goes right).
🔙 May 29th. I moved my site back to Jekyll after roughly a year on Mixture. Took it as an opportunity to freshen up the design and add a few features. Might seem like nothing, but we all know that the project we deal with the least is usually our own site. :)
📝 June 24th. I released, in a joint effort with SitePoint, the SitePoint Sass Reference, a platform aiming at explaining Sass buzzwords, mostly targeted at newcomers.
❓July 19th. I opened an Ask Me Anything repository and have replied to over 50 questions already. Feel free to add your own!
🏆 July 20th. I made it to the shortlist for the “Developer of the Year” Net Award. At 23, needless to say it is a huge honor. Sara Soueidan later won the well-deserved award on September 21st.
🎤 August 27–28th. I gave a talk entitled Three Years of Purging Sass at Frontend Conference in Zurich. It was a great venue, and as always, a good opportunity to meet all the nice folks from our industry.
🔥 August–September. Some time at the end of the summer, I started suffering from tendonitis in my right wrist, probably (although yet to be confirmed) due to a larger-than-average amount of stress at that time. To this day, I still happen to feel some pain in the arm, even though I found some tricks to alleviate it.
🇹🇷 September 17–21st. I left Europe for the very first time, although didn’t go much further and visited Istanbul. Very interesting city, although probably not one I would like to live in. I have good memories though.
💔 October 4th. I ended a 4 years long relationship with my girlfriend.
✨ November 6th. I built node-legofy with my friend Valérian Galliat, a script to convert your images to mosaics of LEGO. Super fun side project.
💬 December 7th. I created the SJSJ (Simplified JavaScript Jargon) repository as an attempt to make JavaScript buzzwords a bit less obscure to newcomers. I am astonished at how well received the project is, with already over 1300 stars on GitHub. Keep it up, you people!
And that’s pretty much it, I guess. Which is already a lot for a single year if you ask me! 2015 is also the year when I took up pool (billiards) as a serious hobby (quite antinomic, some would say). Again, all in all, it was quite a good year, rich in new experiences and people. Let’s hope 2016 is similar. :)
What about you my friends, how was 2015 for you?
]]>@brad_frost Brain wave: a cheat sheet for web developers that explains, in simple terms, what npm/composer/babel/etc are/do/require/etc
— Nate Bailey, Twitter
The general idea is that the JavaScript ecosystem has gotten both complex and bloated in the last few years, to the point where it might look scary for newcomers (and even experienced developers). And this is too bad.
I gave it a night and thought it would be a good idea to have a GitHub repository acting as a glossary for most (all is likely to be impossible) JavaScript buzzwords. This repository, I named SJSJ for Simplified JavaScript Jargon.
The reason why I introduce it in a proper blog post is because I would need your help. Indeed, I would like this to become a community-driven project, as:
As of today, not even a week since the creation of the repository, there are already 66 terms listed, 40 of them being properly documented. And each day brings new kids on the block! Generous contributors are generous, I’m telling you.
The cool thing with this project is that an entry does not need to be long. Nobody has to write 700 words on a concept or a JavaScript library. It is all about explaining a word in a few sentences, as simply as possible.
Ideally, I would like this to follow in the footsteps of Thing Explainer by xkcd’s Randall Munroe, who explains complicated concepts with nothing more than the thousand most used words of the English language. If we can do that to explain the JavaScript ecosystem, we can make it accessible to everybody, even beginners, and that is a big deal.
So how can you contribute to this sweet little project? Well, as of today the best thing to do would be to check the to-be-completed entries in the README, and fill them in. If you would like to document a term that is not part of the list, feel free to add one, as long as it is related to JavaScript.
There are a lot of things we can do from there. For starters, we can make sure that all existing entries are correctly documented, with no mistake, and in a way that is so simple that even somebody with little to no JavaScript background could understand it.
Then, maybe it might be interesting to add categories to these entries, such as libraries, frameworks, concepts, APIs, etc. We could have an alphabetical glossary, and a category-based one. I am not sure it is necessary, but if we hit 100+ entries, it might eventually become needed.
I would also love to see links to related reads inside each entry so that people can actually go further with a concept by browsing these links. On topic, someone suggested to add a link to a StackOverflow tag when possible. I like that idea, so there is that.
In any case, I am open to any suggestion to improve this project, so feel free to ping me here or on Twitter to discuss it!
Thank you my friends, and happy coding!
Here are a few tips I have found helpful when struggling with tendonitis. They might not all work for everybody, but they did the trick for me.
Do not trigger the pain, under any circumstances. It is not like a cramp where forcing a bit of pain helps making it better. It makes things worse. When you feel pain, stop what you are doing.
When you are in pain and not using your tendons (see first advice), try putting some ice on the inflamed area. It’s not much, but it can reduce the pain a little. Also apply some specialized ointment or green clay to the painful area and rub it in (or ask someone to do it if it is too painful to do yourself) for a few minutes, a few times a day.
Drink a lot of water; it helps hydrate the tendons (so I’ve heard) and can help in the long run. It is also good for the body in general, so you know: win-win.
Be super careful not to overuse your other, healthy hand or you will end up with both arms out of action. When you feel like your okay arm is even starting to get tired, stop what you are doing and do something else. Seriously, make sure not to let tendonitis set in on the other arm.
Take anti-inflammatory pills only if you really need them (and if your doctor prescribes some) and stop when you feel like it’s getting better. Still, try to reduce the usage of your tendons to a bare minimum even under anti-inflammatory treatment (especially when under treatment, actually): the pills do not heal anything, they just make the pain disappear. If you keep straining your tendons in the meantime, you are making things worse.
Consider taking specialized food supplements. Cicatendon is a French brand focused on helping get rid of tendonitis. You can probably order it or find an equivalent in your country. You won’t notice a difference within 2 days of course, but I suppose it can help in the long run.
When dealing with wrist tendonitis, it might be a good idea to wear a splint. It is annoying and quite ugly, I concede, but it helps support the arm and prevents it from moving too much, which would trigger pain (see first advice).
If the pain is located in the arm and you notice your mouse or trackpad is partially responsible for it, it might be worth looking for specialized equipment allowing your arm to sit in a more comfortable (read: less painful) position. Most companies are quite willing to help you with this kind of request, so be sure to talk about it with your employer.
When you finally feel better, still be careful. It can (and is likely to) come back quite quickly. Do not force as soon as you feel healed, or you will have to start over.
I hope it helps. Keep it up, and feel better! Further tips on this Twitter discussion.
Words cannot even begin to describe the horror that happened on November 13th in Paris. Yet Mehdi Meklat and Badroudine Said Abdallah did something touching in their essay entitled “Le bruit des balles” (literally “The sound of bullets”). That’s why I felt the urge to translate it into English, as best I could, so that more people could read it.
A street, very near République. It was around 10PM. And the sound of war…
The lady was wearing a white coat. And small ballet pumps. She had some kind of bun that was getting untied as she was moving faster. She was walking in one way yet looked in the other. At this moment, she was alone in this street where gunshots were showering down. There was a characteristic resonance, that sound so sudden, so odd, bullets sliding in the air. She was walking, terrified, eyes wide open, running away from the horror she was hoping to leave behind. And then, on the other side of the street, a crowd started running, coming out of nowhere.
Everybody was screaming and trying to hide in any corner. Panic made a man stumble. He fell down, then stood up and kept on with his hellish race, the price of his life. Bullets, with no interruption. Seconds, not that long really, still this image of crowds, hysteria, fear, which will last until the end of our days.
Calm never really came back to this street. The night was getting darker and darker. People all over the place. Cold hearts. Only one man started dancing. A few hundred meters away from the attacks. In a neighborhood where life had fainted. While the TV was yelling terrible words: horror, war, terrorists, death. Only one man started dancing, as if it was the only thing left to do. A few steps, a rhythm, to the sound of horrors.
And the sound never stopped. Revolving lights transforming the walls of the city into gloomy paintings, the sound of police cars replacing that of the bullets, trucks hurtling by and deafening the remaining passers-by. Windows opening timidly, trying to see what is going on, to hear one last sound. The TV keeps going: the horror, war, death, like a fucking chorus that we will remember. It is the war of sounds, of screams echoing in this street, near République.
It is in such a moment, so unique, so terrible, that we feel a swing.
Our lives getting transformed, lives which will probably continue far away from here, when we tell ourselves we have nothing left to do. “Shall we leave?”, messages are pouring in and networks are getting clogged up. We close the windows to get away from the all-too-noisy street, steeped in sounds that will play again forever. We want to close our eyes. We want to wake up.
— Mehdi Meklat and Badroudine Said Abdallah in Le bruit des balles
]]>Over the months, I noticed that I started forgetting about stuff… Forgetting to reply to some emails. Forgetting to buy some stuff at the supermarket. Forgetting to do important things in the house. There is just so much going on that my brain prioritises and filters things, although not always in the correct order… Duh.
Therefore I started looking for a solution to work around this problem. I am not much of a paper-and-pen kind of person, so it was not really a solution for me to have post-it notes everywhere. After trying a few things, I finally realized that Trello could actually do the trick!
If I were to use Trello to manage my life, the first thing to do was opening a dedicated board, and creating relevant columns. I went with a 7-columns system:
I have to admit it took me a few days to get used to it and to actually think of launching Trello and adding cards to the board when needed. I also set up the Trello app on my phone so it’s easy to create a card no matter where I am or whether I have connectivity (as the app works seamlessly offline as well).
Slowly but surely, it became more natural to me to add and move cards on the board to the point where whenever I have a few minutes (and want to be productive), I open Trello to see if I can cross something off the list. It could be as simple as replying to an email someone sent me a few days ago, or just reviewing a specific pull-request on GitHub so it can be merged. Most of the things on the board are very fast to perform anyway, it’s only a matter of not forgetting to do them (and doing them eventually).
It might sound silly but I also really dig using Trello for the groceries. I used to have post-it notes that I took with me whenever I went shopping. Except when I went shopping directly from work, in which case I didn’t have the list… Having a dedicated column on the board makes it super easy to add things to it during the week in order not to forget anything once at the supermarket.
For time-boxed items, I use the “Due date” feature from Trello. The coupling of the date with a color system that gets more and more prominent as the deadline approaches makes it very effective for actually getting things done before it’s too late. Be it an appointment, sending an invoice, or finishing that chapter the editor keeps asking for.
As my Web column usually contains a large number of cards, I like to use labels to filter them. By having one colour (label) per project, it gets easier to spot which projects need extra attention. Another use of labels could be to adopt a more Scrum-y approach, with only 3 columns: To do, Doing and Done, and then use colors to replace my current category-based column setup. You’d have a label for Web, a label for People, a label for Twitter and so on. Although given the boolean state (done or not done) of most items and the fact that we don’t care about an item once done, I don’t feel like this Agile approach would suit me well.
One good thing from Trello is also the ability to re-order cards in a column which can come in handy for prioritizing. I usually put important cards at the top (such as time-boxed ones) and cards with low priority at the very bottom. It then gets super easy to visualise important items by scanning the top cards of each column.
I have been using Trello like this for the last few months and I must say it has proven to work well for me so far. I also shared this board with my girlfriend so she can deal with stuff that implies us both. She actually has a board of her own (which has waaaaay more cards than mine) to keep track of everything she has to do.
I know it might not suit everybody. Still, I feel like it is an interesting alternate usage of a scrum tool so I thought it would be cool to write about it. What do you think? How do you manage to keep track of everything?
]]>However, regular expressions are hard to read, if not to say barely decipherable. That’s why I thought an article on the basics of regular expressions would not be such a bad idea after all. And to avoid the very theoretical approach, I feel like actually building a regular expression the hard way from the ground up would be a good way to learn.
Disclaimer! I am not an expert in regular expressions, although I guess I can make my way in most situations with them, as long as it’s not getting overly complex. If you happen to find a way to improve this code, be kind enough to explain what you would do in the comments. That would be super great. :)
In case you are not entirely sure what this is all about, allow me to put you back on track. A regular expression, often shortened as “regex” or “regexp”, is a sequence of characters that define a search pattern. Because of their usefulness, regular expressions are built-in in most programming languages. A very practical example would be a regular expression to validate an email address.
That being said, it is important to point out that not all regular expression engines are the same. You might have heard of PCRE (Perl Compatible Regular Expressions) or POSIX regular expressions. PCRE is the engine used in many languages including PHP, and can be thought of as regex on steroids. It is the “new standard”, so to speak. However, not all languages stick to PCRE. For instance, JavaScript has limited support for PCRE, and a lot of features, such as the ability to write regular expressions over several lines using insignificant whitespace and line breaks, are absent.
Also, as it is forbidden to write about regular expressions without dropping some bombs, here is a famous quote to get started:
Some people, when confronted with a problem, think “I know, I’ll use regular expressions.” Now they have two problems. — Jamie Zawinski
Note: to play with regular expressions, I highly recommend Regexr which not only is extremely well thought, but also provides a handy reference as well as a way to save a regular expression for sharing. There is also Regex101 which is a great tool to fiddle with regular expressions.
Everything started from a tweet from Greg Whitworth about regular expressions:
This is my most illegible regex to date:
\w+\[\w+(\|\=|\*\=|\$\=|\^\=|~\=|\=)(|\")\w+(|\")](|\s+){
It does look illegible. As do most regular expressions. I started discussing with Greg about what he was trying to achieve and learnt he wanted to find CSS attribute selectors in a document. It seemed like a fun challenge so I spent a few minutes on it and came up with this:
\[[a-z][a-z0-9-]*([|*$^~]?=("[^"\n]*"|'[^'\n]*'|[^"'\s\]]+)(\s+i)?)?]
In this article, we will see how to come up with such a monster, and what are the required steps to get there. But first, let’s be clear on what we want to match: attribute selectors. These are some examples of selectors we want to match:
[foo]
[foo=""]
[foo=bar]
[foo="bar baz"]
[foo='bar baz']
[foo^='bar']
[foo$='bar']
[foo|='bar']
[foo~='bar']
[foo*='bar']
[foo=bar i]
[foo="bar" i]
[foo='bar' i]
On the other hand, these are some examples of things we do not want to match:
[foo@='bar']
[foo=bar baz]
[foo='bar"]
[foo="bar']
[foo=']
[foo="]
[foo
[foo='bar' j]
We will also assume that selectors are correctly written, sticking to what is possible and allowed by the specifications. For instance, the following are theoretically invalid:
[42foo]
[foo_bar]
[FOO]
[¯\_(ツ)_/¯]
Alright? Let’s get started slowly but surely.
Note: for the sake of readability, I omitted the PCRE delimiters (/…/
) from all regular expressions in this article. We also won’t talk about flags as they are basically irrelevant to this discussion.
Let’s start easy. We want to match an attribute selector that only checks for the presence of the attribute, without checking its value, such as [href]
. To do so, we are looking for a word inside of brackets.
To match a word character, we can use the \w
meta character. This literally means:
Matches any word character (alphanumeric & underscore). Only matches low-ascii characters (no accented or non-roman characters). Equivalent to
[A-Za-z0-9_]
.
So the very first version of our regular expression to match an attribute selector would look like this:
\[\w+]
Let’s dissect it:
\[
: matches an opening square bracket. The leading backslash is a way to escape the next character. It is needed as [
has a special meaning in regex.\w
: matches a word character, which is basically a lowercase letter, an uppercase letter, a number or an underscore.+
: matches the last token (here \w
) at least one time. Here, we want to imply that we need at least one character.]
: matches a closing square bracket. As there is no unescaped opening bracket, this one does not need to be escaped.So far so good, right? Let’s check our test list to see how our regular expression performs.
Oops, \w+
is actually not quite right! For starters, we do not want the attribute name to start with a number, and we don’t want to allow underscores either, only hyphens. Along the same lines, uppercase letters are not actually allowed, so instead of \w+
we should check for: [a-z][a-z0-9-]*
. This means a mandatory latin letter that can be (but not necessarily) followed by any number of latin letters, numbers or hyphens. This is what the star (*
) implies: from 0 to infinity. Our regex is now:
\[[a-z][a-z0-9-]*]
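Running the same checks (my own test strings) shows the refined name pattern now rejects what \w+ let through:

```javascript
// Name must start with a lowercase letter, then letters, digits or hyphens
const nameOnly = /\[[a-z][a-z0-9-]*]/

console.log(nameOnly.test('[href]')) // true
console.log(nameOnly.test('[42foo]')) // false — cannot start with a digit
console.log(nameOnly.test('[foo_bar]')) // false — underscores are out
console.log(nameOnly.test('[FOO]')) // false — no uppercase letters
```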
To be completely honest, we could actually very slightly tweak our regular expression and stop here. Think about it: what if we said that an attribute selector is an opening bracket followed by anything, and then a closing bracket? As a regular expression, that would look like this:
\[[^\]]+]
This bracket mess literally means “find an opening square bracket, followed by anything that is not a closing square bracket, followed by a closing square bracket”. To do so, it relies on a negated set that we will see more in-depth in the next section.
Broadly speaking, it is more than enough to find attribute selectors in a stylesheet but we didn’t learn much! Also, this version captures a lot of poorly formatted selectors, as well as some false-positive results as you can see in the next image. Let’s try to match a valid selector!
We now want to match raw attribute selectors as well as attribute selectors checking for the value. For now, let’s focus on something like [foo=bar]
without caring too much about modulators and quotes. Let’s put our current version here:
\[[a-z][a-z0-9-]*]
To match a value, we need to check for the presence of an equal sign (=
), then a series of at least one character that is not a closing square bracket (for now). To match anything that is not a specific character we use a negated set, written as: [^X]
where X
is the character you do not want to match (escaped if needed).
A negated set is a way to match any character that is not in the set.
So to match anything that is not a closing square bracket, it is: [^\]]
, as we’ve seen in the previous section. Our regex is now:
\[[a-z][a-z0-9-]*=[^\]]+]
Oh-ho though… Now [foo]
doesn’t match anymore! That’s because we did not make the equal + something part optional. We can do that by wrapping it in parentheses and adding a question mark right after it ((..)?
). Like so:
\[[a-z][a-z0-9-]*(=[^\]]+)?]
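A quick check (my own test strings) confirms both forms now match:

```javascript
// The (=[^\]]+)? group makes the value part optional again
const withValue = /\[[a-z][a-z0-9-]*(=[^\]]+)?]/

console.log(withValue.test('[foo]')) // true — no value needed
console.log(withValue.test('[foo=bar]')) // true — value accepted
```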
The question mark says:
Matches 0 or 1 of the preceding token, effectively making it optional.
That’s going somewhere! Attribute selectors can involve a modulator before the equal sign to add extra validations. There can be only 0 or 1 modulator at a time, and it has to be one of: |
, *
, $
, ^
, ~
. We can make sure the modulator is valid by using a character set. To make it optional, there again we will use the question mark:
\[[a-z][a-z0-9-]*([|*$^~]?=[^\]]+)?]
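The character set restricts which modulators are accepted, as this small check of mine illustrates:

```javascript
// [|*$^~]? allows at most one modulator from the valid set before the equal sign
const withModulator = /\[[a-z][a-z0-9-]*([|*$^~]?=[^\]]+)?]/

console.log(withModulator.test("[foo^='bar']")) // true — ^ is a valid modulator
console.log(withModulator.test("[foo@='bar']")) // false — @ is not in the set
```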
Like many languages, CSS does not enforce a specific quote style. It can be either double ("
) or simple ('
). Actually most of the time, quotes can be safely omitted! It is the case for attribute values, as long as they don’t contain any special characters. It is best practice to put them anyway, but our regular expression should make sure it works for valid unquoted values as well.
So instead of matching anything but a closing square bracket, we want to match either:
A double quote (") followed by anything that is not a double quote or a forbidden line break (using the \n
character class), then a double quote again: "[^"\n]*"
.A single quote (') followed by anything that is not a single quote or a line break, then a single quote again: '[^'\n]*'
.A series of characters that are not quotes, spaces (using the \s
character class) or a closing square bracket: [^"'\s\]]+
.To achieve this, we can use the alternation operator |
:
Acts like a boolean OR. Matches the expression before or after the
|
. It can operate within a group, or on a whole expression. The patterns will be tested in order.
It gives us this pattern:
("[^"\n]*"|'[^'\n]*'|[^"'\s]+)
Which we can now incorporate in our expression:
\[[a-z][a-z0-9-]*([|*$^~]?=("[^"\n]*"|'[^'\n]*'|[^"'\s\]]+))?]
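Checking this stage against a few of our samples (strings are mine) shows the quote handling at work:

```javascript
// The alternation accepts double-quoted, single-quoted or bare values
const withQuotes = /\[[a-z][a-z0-9-]*([|*$^~]?=("[^"\n]*"|'[^'\n]*'|[^"'\s\]]+))?]/

console.log(withQuotes.test('[foo="bar baz"]')) // true — spaces are fine inside quotes
console.log(withQuotes.test('[foo=bar baz]')) // false — but not in unquoted values
console.log(withQuotes.test('[foo=\'bar"]')) // false — mismatched quotes are rejected
```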
CSS Selectors Level 4 introduces a flag to attribute selectors to discard case-sensitivity. When present, this flag tells the browser to match the attribute value regardless of its case.
This flag (noted i
) must be present after at least one space, right before the closing square bracket. Testing for it in our regular expression is actually super easy using \s+i
.
\[[a-z][a-z0-9-]*([|*$^~]?=("[^"\n]*"|'[^'\n]*'|[^"'\s\]]+)(\s+i)?)?]
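One last check of mine against the full expression, flag included:

```javascript
// Final form: name, optional modulator + value, optional case-insensitivity flag
const finalRe = /\[[a-z][a-z0-9-]*([|*$^~]?=("[^"\n]*"|'[^'\n]*'|[^"'\s\]]+)(\s+i)?)?]/

console.log(finalRe.test('[foo="bar" i]')) // true — flag accepted
console.log(finalRe.test("[foo='bar' j]")) // false — only the i flag is allowed
```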
Regular expressions are not exclusively made for matching and validating content. They are also super useful when it comes to capturing some dynamic content as part of a search pattern. For instance, let’s say we want to grab the attribute value in our regular expression.
Capturing content as part of a regular expression is made with parentheses ((..)
). This is called a capturing group:
Groups multiple tokens together and creates a capture group for extracting a substring or using a backreference.
You might be confused as we already used parentheses in our expression but not for capturing. We used them to group tokens together. This kind of behaviour is what makes the language of regular expressions difficult to grasp: it is not regular, and some characters have different meanings depending on their position or the context they are used in.
To use parentheses as a grouping feature without capturing anything, you need to start their content with a question mark (?
) directly followed by a colon (:
), like this: (?: … )
. This tells the engine not to capture what is being matched inside the parentheses. We should update our expression to avoid capturing the equal part (as well as the case-sensitivity flag):
\[[a-z][a-z0-9-]*(?:[|*$^~]?=("[^"\n]*"|'[^'\n]*'|[^"'\s\]]+)(?:\s+i)?)?]
As you can see, we added ?:
right after the first opening parenthesis so we do not capture what is being matched. On the other hand, the second opening parenthesis, after the equal sign, is capturing the attribute value. Which could be desired! Now, if we want to capture the attribute name as well, we only have to wrap the relevant part of the regex in parentheses:
\[([a-z][a-z0-9-]*)(?:[|*$^~]?=("[^"\n]*"|'[^'\n]*'|[^"'\s\]]+)(?:\s+i)?)?]
To make it easier to understand, consider this selector: [href^="#"]
. When running the previous regular expression against it, we will capture 2 things:
href
: the attribute name"#"
: the attribute valueIf we want to grab the value only, without the possible quotes, we need to move the capturing group inside the quotes. Depending on the purpose of the regular expression (validation, capture, etc.), it might be interesting or even needed to use capturing groups to grab content from the matched patterns.
That’s it! The final state of our regular expression is able to correctly match and validate a CSS attribute selector. I have run some tests on it and could not find a reasonable way to break it (as long as the selectors are sticking to what is allowed by the CSS specifications).
As you can see, it is not that hard to write a decent regular expression, especially when you take it slow and build it step by step. Do not try to rush the perfect solution right away. Start with the basic match, then enhance it to deal with more complex scenarios and edge cases.
It is worth noting that the difficulty with regular expressions is usually not writing them but reading them, and thus maintaining them. Therefore, it is highly recommended to extensively unit-test code snippets relying on regular expressions. It can be a huge time-saver when updating a regular expression to have a few dozen tests making sure that the behaviour didn’t break.
Last but not least, Adonis mentioned in the comments a very handy tool to visualize the meaning of a regular expression in a graphical way. This tool, called Regexper, renders a graph based on a given regular expression. Impressive! Here is the graph for our regex (using non-capturing groups only for the sake of simplicity):
I hope you learnt a few things anyway. And if you find a way to improve it, be sure to share in the comments!
Huge thanks to my brother Loïc for helping me make this article a valuable piece of information about regular expressions. :)
]]>In this short document, I try to describe what I feel would be a great workflow for me, using GitHub as a central point rather than having a collection of tools. Obviously, this standpoint is highly developer-centric and might not fit all teams / projects.
Given how long this article is, here is a table of contents so you can quickly jump to the section you want:
Below is a short and informal methodology on how to use GitHub as a project workflow, heavily relying on pull-requests. While it might sound scary at first, this approach actually has a lot of benefits that we’ll investigate further in the next section.
The rough idea is that at the beginning of a sprint, we create a(n empty) pull-request for all user stories. In the description of the pull-request, we write tasks in (GitHub Flavoured) Markdown using GitHub support for checkboxes. Then, affected developers commit their work to this branch, progressively checking off the tasks. Once all tasks from a pull-request have been completed, it can be reviewed then merged.
Everybody, from the Scrum Master to the Product Owner, needs a GitHub account. It actually is only a matter of minutes, but it still needs to be done for this workflow to work correctly.
The idea is that every feature involving some development has its own pull-request opened at the beginning of the sprint. Tasks are handled as a checklist in the description of the pull-request. The good thing with this is that GitHub is clever and shows the progress of the pull-request in the list view directly.
For all stories involving development, create a branch named after the story and open a pull-request from this branch to the main one. When sticking to Gitflow conventions, the main branch is develop
, and story branches should start with feature/
(some might start with fix/
or refactor/
). Then, we usually put the number of the story first, and a slug for the story goal (e.g. feature/42-basic-teaser
).
Opening pull-requests can be done directly on GitHub, without having to clone the project locally or even having any Git knowledge whatsoever. But only when there is something to compare. It means that it is not possible to open a pull-request between two identical branches. Bummer.
To work around this issue, there are two options. The first is to wait for the story to be started by someone (with at least one commit), so there actually is something to compare between the feature branch and the main branch. That is not ideal though, as we would like every story to have its own pull-request open right from the beginning of the sprint. The second option is to create an empty commit, like so:
# Creating the branch
git checkout -b feature/42-basic-teaser
# Adding an empty commit (with a meaningful name) to make the pull-request possible
git commit --allow-empty -m "Feature 42: Basic teaser component"
The point of this commit is to initialize the branch and the feature so that a pull-request can be created on GitHub.
At this point, head onto the home of the GitHub repository and click on the big ol' green button. Then, create a pull-request from the relevant branch to the main one (automatically selected). That’s it! For more details about how to name and fill the pull-request, refer to the next sections.
Name the pull-request after the feature name, and prefix it with [WIP]
for Work In Progress. This will then be changed to [RFR]
for Ready For Review once the story is done (see Reviewing the pull-request). If it is someone’s specific job to merge pull-requests and deploy, you can also change the name for [RFM]
(for Ready For Merging) after the reviewing process so it’s clear that the feature can be safely merged.
Note: depending on your usage of GitHub labels, you can also ditch this part and use WIP
, RFR
and RFM
labels instead. I prefer saving labels for other things and stick the status in the PR name but it’s really up to you.
In the description of the story, create a list of tasks where a task is a checkbox, a short description and importantly enough, one or several persons involved in the making. From the Markdown side, it might look like this:
* [ ] Create the basic React component (@KittyGiraudel)
* [ ] Design the icons (@sharonwalsh)
* [ ] Integrate component in current page (@mattberridge)
* [ ] Clarify types of teasers with client (@moritzguth)
As long as all actors from a project are part of the GitHub organisation behind the project, everybody can edit/delete any comment, so anyone is able to add new tasks to the description if deemed necessary.
Note: GitHub Flavoured Markdown will automatically convert [ ]
into an unticked checkbox and [x]
into a ticked one. It will also remember the state of the checkbox so you can actually rely on it.
The comments on the pull-request can be used to discuss the story or specific tasks. We can safely ask questions in there, tagging relevant contributors by prefixing their GitHub username with an @
sign, include code blocks, quotations, images and pretty much whatever else we want. Also, everything is in Markdown, making it super easy to use.
Once all checkboxes from the description have been checked, the name of the pull-request can be updated to [RFR]
for Ready For Review. Ideally, the person checking the last box might want to ping someone to get the reviewing process started. Doing so avoids having a pull-request that is done but unmerged because nobody has reviewed it.
To review a pull-request, we use GitHub inline comments in the Files changed tab. In there, we can comment any line to ask for modification. Adding a line comment notifies the owner of the pull-request so that they know they have some re-working to do, and the comment shows up in the Conversation tab.
When updating a line that is the subject of an inline comment, the comment disappears because it is no longer relevant. As comments get addressed, they vanish one by one, keeping the pull-request clean.
Once the review has been done, the pull-request can be merged into the main branch. If everything is fine, it should be mergeable from GitHub directly but sometimes there are potential conflicts so we need to either rebase the branch to synchronize it with the main branch or merge it manually. Anybody can do it, but the pull-request owner is probably the best person to do it.
Note: in order to keep a relevant and clean commit history, it would be wise to keep commit messages clear and meaningful. While this is not specific to this methodology, I think it is important enough to stress it.
Labels can be very helpful to add extra pieces of information to a pull-request on GitHub. They come in particularly handy as they show up in the list view, making it visible and obvious for everybody scanning through the open pull-requests.
There is no limit to the number of labels a project can have. They are also associated with colors, building a small yet powerful naming system. Labels can be something such as Design, Frontend, Backend, or even Waiting for info, Waiting for review or To be started. You name it.
On a project involving design, frontend, backend and devops teams, I would recommend having these team names as labels so each team is aware of the stories they have to be working on.
More often than not, a story is mostly for one person. Or when several actors have to get involved in a story, it usually happens one after the other (the designer does the mockup, then the frontend developer does the component, then the backend developer integrates it in the process, etc.). Because of this, it might be interesting to assign the pull-request to the relevant actor on GitHub, and change this assignment when needed.
Because GitHub is a platform for Git, it is a great tool to preserve a clean history of a project. One way to achieve this goal (if desired) would be to use milestones. To put it simply, on GitHub a milestone is a named bucket of issues/pull-requests, which can optionally have a description and a due date.
Applying this to a Scrum project could mean having a milestone per sprint (named after the number of the sprint), with a due date matching the one from the end of the sprint and the goals of the sprint in the description. All pull-requests (stories) would be tagged as part of the milestone.
While not very helpful during development, because all open pull-requests are part of the current sprint anyway, it might be interesting to have this as a history, where all pull-requests are gathered in milestones corresponding to sprints.
The fact that this workflow is heavily focused on pull-requests does not mean that GitHub issues are irrelevant. Au contraire! Issues can still be used for additional conversations, bug reports, and basically any non-feature-specific discussion.
Also depending on the relationship with the client (internal or external), issues might be the good place for them to report problems, bugs and suggestions. Again, everything is centralized on GitHub: the pull-requests remain clean and focused on features; issues are kept for all side-discussions.
That is all I have written about it so far. I would love to collect opinions and feedback about this way of working. Has anyone ever tried it? How does it perform? How does it scale? What are the flaws? What are the positive effects? Cheers!
]]>The idea is simple: people can ask questions about basically anything on the repository, to which the author replies however they feel. Once a question has been answered, the issue gets closed and that’s it.
It turns out he is not the only one with an Ask Me Anything repository; there are a whole bunch of people replying to questions about them on GitHub!
I really dig this idea. I feel it’s a great way to know more about someone or to be able to ask questions in a more robust way than Twitter, and less private (so more searchable) than emails.
That’s why I created an Ask Me Anything repository for me where you can ask me questions if you feel like it. Feel free to ask anything; code, Sass, life, hobbies, stupid, non-stupid, whatever.
I already replied to about 20 questions so far, among which you’ll learn that:
And much more! I realise this is probably something I should have done a while ago given the amount of questions I’ve been asked on Twitter / GitHub / email. Better late than never!
I also encourage you to have your own repository so people can ask you questions! Let’s dig this AMA concept!
]]>The following is a guest post by David Khourshid about how he uses Sass and the 7-1 pattern to style React components. React being all over the place these days, I am very glad to have him talking about his experience here.
Chances are, as a frontend developer, you’ve heard of Facebook’s library for building user interfaces, React. Of course, an important part of building UI is styling it, as well. React strongly enforces the idea that a user interface is composed of many "reusable components with well-defined interfaces", and many CSS methodologies and architectures embrace this as well, including:
Fortunately, any of these architectures can be used for styling React components, or any components for that matter! ("Styling Components in Sass" sounded a bit too dry for an article title, though.) We will be focusing on Kitty’s own 7-1 pattern for this article, which I have used in multiple projects.
Just like with any language, writing CSS without a well-defined architecture and/or organizational pattern quickly becomes an unmaintainable mess. Christopher Chedeau, a developer at Facebook, listed the problems in his "CSS in JS" presentation:
We will explore how using proper organization and architecture in Sass can mitigate these problems, especially within the context of styling React components.
If you want to jump straight to the code, you can check the sample React component I put on GitHub.
Before we dive into how each of the above problems are solved, let’s take a look at the end result by styling a simple React datepicker component from this mock-up:
Our solution will have these characteristics:
Using the 7-1 pattern, the file organization for our datepicker component looks like this:
All of our React components are in the /components
folder, which are imported inside index.js
. Webpack is used in this example to bundle the JS (and optionally the CSS) files, which we’ll explain later.
Each component used is represented in Sass inside the /stylesheets/components
folder, which is part of the 7-1 pattern. Inside /stylesheets
, /base
and /utils
is also included -- /base
includes a simple box-sizing reset, and /utils
includes a clearfix mixin and shared constants (variables). The /layout
, /pages
, and /vendors
folders are not necessary for this project.
You’ll also notice the _all.scss
partial file in each of the folders. This file provides a way to consolidate all partials inside a file that should be exported, so that only _all.scss
needs to be imported into main.scss
:
// Inside /components/_all.scss
@import 'calendar';
@import 'date';
@import 'datepicker';
@import 'header';
@import 'month';
And finally, the main.scss
file, which imports all partial stylesheets:
.my-datepicker-component {
@import 'utils/all';
@import 'base/all';
@import 'components/all';
@import 'themes/all';
}
Yes, the imports are wrapped inside a .my-datepicker-component
block, which is the target selector of React.render(…)
in this project. This is completely optional, and just allows greater isolation for the component via increased specificity.
Each .scss
component file should only have these concerns:
If you want your components to be able to be themed externally, limit the declarations to only structural styles, such as dimensions (width/height), padding, margins, alignment, etc. Exclude styles such as colors, shadows, font rules, background rules, etc.
Here’s an example rule set for the “date” component:
.sd-date {
width: percentage(1/7);
float: left;
text-align: center;
padding: 0.5rem;
font-size: 0.75rem;
font-weight: 400;
border-radius: 0.25rem;
transition: background-color 0.25s ease-in-out;
// Variants
&.past,
&.future {
opacity: 0.5;
}
// States
&:hover {
cursor: pointer;
background-color: rgba(white, 0.3);
}
}
Just as you’d expect, everything’s neatly contained inside .sd-date
. There are quite a few magic numbers in this rule set, though, such as font-size: 0.75rem;
. I implore you to use Sass $variables
to reference these values, and Kitty provides guidelines on this.
I’m using a very thin naming system for component selectors; that is, I’m only prefixing each component with sd-
(simple-datepicker). As previously mentioned, you can use any naming system you (and your team) are most comfortable with, such as BEM.
It goes without saying that we will be referencing styles in our React components using classes. There is a very useful, framework-independent utility for conditionally assigning classes by Jed Watson called classnames, which is often used in React:
import React from 'react'
import classnames from 'classnames'
export default class CalendarDate extends React.Component {
render() {
let date = this.props.date
let classes = classnames('sd-date', {
current: date.month() === this.props.month,
future: date.month() > this.props.month,
past: date.month() < this.props.month,
})
return (
<div
className={classes}
key={date}
onClick={this.props.updateDate.bind(this, date)}
>
{date.date()}
</div>
)
}
}
// Note: CalendarDate used instead of Date, since
// Date is a native JavaScript object.
The simple convention here is that the (prefixed) component class (sd-date
in this example) is always included as the first argument in classnames(…)
. No other CSS/style-specific dependencies are necessary for styling React components.
Depending on your build system, there are a number of ways that a stylesheet can be exported and used within a project. Sass files can be compiled and bundled with Webpack (or Browserify), in which case you would require it within your index.js
file…
import React from 'react'
import Datepicker from './components/datepicker'
require('./stylesheets/main.scss')
React.render(<Datepicker />, document.querySelector('.my-datepicker-component'))
… and include the proper loader (sass-loader, in this case) in webpack.config.js
. You can also compile Sass files separately into CSS, and embed them inside the bundle using require('./stylesheets/main.css')
. For more info, check out the Webpack documentation on stylesheets.
For bundle-independent compilation, you have a few options, such as using Gulp, Grunt, or sass --watch src/stylesheets/main.scss:dist/stylesheets/main.css
. To keep dependencies to a minimum, this project uses the sass watch
command line option. Use whichever workflow you and your team are most comfortable with.
Now, let’s see how using a proper Sass architecture and organizational method solves each of the seven problems mentioned at the beginning of this article.
It’s worth mentioning (repeatedly) that CSS selectors are not variables. Selectors are “patterns that match against elements in a tree” (see the W3C specification on Selectors) and constrain declarations to the matched elements. With that said, a global selector is one that runs the risk of styling an element that it did not intend to style. These kinds of selectors are potentially hazardous, and should be avoided:
*
)div
, nav
, ul li
, .foo > span
).button
, .text-right
, .foo > .bar
)[aria-checked], [data-foo], [type]
):hover
, .foo > :checked
)There are a few ways to “namespace” a selector so that there’s very little risk of unintentional styling (not to be confused with @namespace
):
.sd-date
, .sd-calendar
)[data-sd-value]
).sd-date.past
)With the last namespacing suggestion, there is still the risk of 3rd-party styles leaking into these selectors. The simple solution is to strongly reduce your dependency on 3rd-party styles, or prefix all of your classes.
The class naming system (which can be used in conjunction with BEM, etc.) for our React components mitigates the risk of global selectors and avoids a global namespace by prefixing classes and optionally wrapping all classes inside a parent class (.my-datepicker-component
, in this case).
By doing this, the only way our selectors can possibly leak (i.e. cause collisions) is if external components have the same prefixed classes, which is highly unlikely. With Web Components, you have even greater style scope isolation with the shadow DOM, but that’s outside the scope of this article (no pun intended).
The organization of the component styles in the 7-1 pattern can be considered parallel to that of the JavaScript (React) components, in that for every React component, there exists a Sass component partial file that styles the component. All of these component styles are contained in one main.css
file. There are a few good reasons for this separation:
The only potential performance-related issue with this is that each page will include all component styles, whether they’re used or not. However, using the same file allows the browser to cache the main stylesheet, whereas an inversion-of-control scenario (e.g. require('stylesheets/components/button.css');
) is likely to cause many cache misses, since the bundled stylesheet would be different for each page.
A well-defined stylesheet architecture will only ever include styles for components that a project uses, but if you still want to be sure that there is no dead-code (unused CSS), try including uncss in your build process.
Add clean-css to your build process, or any of its related plugins, such as gulp-minify-css. Alternatively, you can specify the outputStyle
as 'compressed'
when compiling with Sass. Doing this and using GZIP will already provide a significant performance boost; shortening class names is a bit overkill and only useful at a (really) large scale.
You’re in luck -- Sass has variables for this very purpose. Lists and maps give you more flexibility in organizing shared values between components. In the 7-1 pattern, variables can be referenced in a utils/_variables.scss
file, or you can get more granular and store related variables in the base/
folder, such as base/_typography.scss
for font sizes and names, or base/_colors.scss
for brand and asset colors used in your project. Check out the Sass guidelines for more information.
This is just a fancy way of saying “not knowing when styles are being unintentionally overridden by selectors of the same specificity”. Turns out, this is rarely ever an issue when following a component-based architecture such as the 7-1 pattern. Take this example:
// In components/_overlay.scss
.my-overlay {
// … overlay styles
> .my-button {
// … overlay-specific button styles
}
}
// In components/_button.scss
.my-button {
// … button styles
}
Above, we are taking full advantage of specificity to solve our non-deterministic resolution woes. And we’re doing so by using specificity intuitively, and with no specificity hacks! We have two button selectors:
.my-button (specificity 0 1 0)
.my-overlay > .my-button (specificity 0 2 0)
Since .my-overlay > .my-button has a higher specificity, its styles will always override .my-button styles (as desired), regardless of declaration order. Furthermore, the intent is clear: “style this button” vs. “style this button when it is inside an overlay”. Having a selector such as .my-overlay-button might make sense to us, but CSS doesn’t understand that it’s intended for a button inside of an overlay. Specificity is really useful. Take advantage of it.
By the way, with a well-structured design system, contextual styling can (and should) be avoided. See this article by Harry Roberts on contextual styling for more information.
As a developer who understands the value of good, consistent design, you’ll probably want a component to be customizable by any developer who decides to use it. There are many ways that you can make configurable styles and themes in Sass, but the simplest is to provide an “API” of default variables in the component stylesheets:
// in base/_color.scss:
$sd-color-primary: rgb(41, 130, 217) !default;
// in the main project stylesheet
$sd-color-primary: #c0ff33; // overwrites default primary color
@import 'path/to/simple-datepicker/stylesheets/main';
Conversely, you can customize similar 3rd-party components by simply styling equally (or more) specific selectors. Since 3rd-party stylesheets should be loaded first, the CSS cascade works naturally to override their styles with the desired ones.
// after the simple datepicker stylesheet has been imported…
// in stylesheets/components/_sd-month.scss
#my-app .sd-month {
// overriding styles
}
Personally, I wouldn’t include 3rd-party styling at all, as the more style dependencies your project includes, the more complex your project’s styling becomes, especially if they aren’t using a similar component-based architecture. If you must use 3rd-party components, make sure that they have a clean, semantic DOM structure that can be styled intuitively. Then, you can style 3rd-party components just like any other component.
React components can be styled in Sass in an efficient, flexible, and maintainable way by using a proper organizational structure, such as SMACSS and the 7-1 pattern. If you know Sass, there’s no new libraries to learn, and no extra dependencies besides React and Sass.
@rmurphey those problems can all be solved with good architecture and preprocesseors https://t.co/JqbK3SBD6d
— Una Kravets (@Una) June 9, 2015
The problems that Christopher Chedeau lists in his “CSS in JS” presentation are valid problems, albeit ones that are easily solved with a well-defined stylesheet architecture, organizational structure, and Sass (or any other preprocessor). Styling the web isn’t easy, and there are many very useful open-source Sass tools and libraries for grids, typography, breakpoints, animations, UI pattern libraries, and more to help develop stylesheets for components much more efficiently. Take advantage of these Sassy resources.
Check out the example simple React datepicker on Github for an example of how Sass can be used to style React components. Oh, and here is a CodePen for you, as a treat!
See the Pen 1e170149edee4b13737894b435b21724 by Kitty Giraudel (@KittyGiraudel) on CodePen.
Over a year later (June 2014), I decided to give Mixture a go. Mixture is also a static site generator, but it is packaged as an application with a nice interface and a couple of extra features that Jekyll does not have. The kind folks at Mixture invited me to write about the transition on their blog.
And here we are, almost a year later again, back to Jekyll one more time. I thought I would wait for Jekyll 3 to be released, but I did not. To be perfectly honest with you, I don’t see it changing anytime soon (but I might be wrong; I seem to be quite undecided regarding this).
Let me get something straight before going any further: Mixture is a terrific tool. Moreover, Neil Kinnish and Pete Nelson are great people who provide some of the best support I’ve ever seen. So Mixture definitely is an interesting piece of software.
Okay, now what did I dislike about it? I think the most annoying thing for me was having to push the compiled sources to the repository instead of the actual development sources. While this may seem irrelevant, it actually prevented me from quickly fixing a typo directly from the GitHub interface.
Fixing anything required me to have the Mixture application installed (which is less of a problem now that I don’t work on Linux anymore) and the repository cloned and up-to-date, then to make the change, compile the sources and finally push everything back to the repository. Tedious at best, highly annoying at worst.
Along the same lines, it was literally impossible for anybody to contribute in any way unless they happened to be Mixture subscribers. I will concede that it is not like hundreds of people contribute to this blog, but some people do submit pull requests to fix typos. Also, as I often offer guest posts to people, I’d like them to be able to submit their work through a pull request as well.
So being able to push uncompiled sources to the GitHub repository and let GitHub Pages do all the compilation and deployment work for me was actually the major factor for me to leave Mixture behind and go back to Jekyll.
Since it was a return rather than a completely new migration, it ended up being extremely easy. Not only do both generators rely on Liquid, they also work pretty much the same way. Only Jekyll relies on a specific naming convention for posts, which I stuck to during my year using Mixture. So moving back took me something like 10 minutes, I’d say.
The next 4 hours were spent redesigning the site (which I suck at).
Anyway, that’s done now. And I am glad to be back. Also I can’t wait for Jekyll 3. I can now update small things directly from GitHub without having to worry about the computer I’m using. And you can now fix my many typos by submitting nice pull requests! :D
Also, if you have any recommendation for the design part, please feel free to suggest. I’m not quite convinced with the current design so I’d be glad to have some feedback. :)
Let’s get started with the basics: a beautiful theme for Sublime Text. If you ask me, there is nothing better than Spacegray. Spacegray not only provides a new syntax highlighting theme for the coding area, but also redefines the whole UI to change color, styles and more generally the whole look and feel.
Spacegray provides three different themes:
I’ve been running on the default dark grey theme for a while but I recently moved on to Eighties, which has a brownish style that is very appealing.
If there is one thing I do like with Sublime Text, it is the amount of options. If you haven’t already, open the default settings file (Sublime Text > Preferences > Settings - Default) and browse through all the available options. You’ll probably discover a thing or two.
Most options’ default values make sense, although there are some of them that you might want to change. Here is my own configuration file (omitting a few boring things), annotated with comments to explain each choice:
{
// Bold folder labels in the sidebar
// so they are distinguishable from regular files
"bold_folder_labels": true,
// Make the caret blink with a smooth transition
// rather than a harsh one
"caret_style": "phase",
// Draw a border around the visible part of the minimap
"draw_minimap_border": true,
// Draw all white spaces as very subtle dots
// as white spaces are very important in some cases
"draw_white_space": "all",
// EOF is kind of a convention and this option makes sure
// there is always one as soon as you save a file
"ensure_newline_at_eof_on_save": true,
// I have a terrible sight and this makes things big
"font_size": 20,
// Add extra gap on top and bottom of each line
// which is basically increasing line height
"line_padding_bottom": 8,
"line_padding_top": 8,
// Show encoding and line endings
// in the status bar on the bottom right
"show_encoding": true,
"show_line_endings": true,
// Force tab size to be equivalent to 2 spaces
"tab_size": 2,
// Make sure there are no tabs, only spaces
"translate_tabs_to_spaces": true
}
The first thing to know is that I, like most tech writers, use Markdown for basically any write-up. Markdown is a terrific format for both writing (obviously) and reading, no matter whether it’s been compiled to HTML or not. Because Markdown uses text symbols to represent content hierarchy (# for titles, * and _ for emphasis, > for blockquotes…), it makes it very convenient to read an unprocessed Markdown file.
Sublime Text comes with a default Markdown syntax highlighter, although you might need some extra features if you happen to write a lot in the editor. For this, there is Markdown Extended. This plugin adds extra features to the default Markdown highlighter, such as highlighting for any YAML Front Matter and sub-highlighting of fenced code blocks. This is absolutely amazing. Basically, it allows you to have Markdown syntax highlighting in the current file while highlighting code blocks with their relevant highlighter (CSS, JS or whatever).
Last but not least of the Markdown tools: Markdown Preview. This plugin is actually quite huge, but there is one thing I use it for: previewing the current file in the browser using the GitHub API (or Python-Markdown when running offline). I don’t use it that often, but sometimes it is better to actually render the file in a browser to see what it looks like (especially when it involves images).
Let’s be honest: everything is about word count when writing. How long is this article? How many pages are there in this chapter? Knowing the number of words in a document is extremely handy.
I suppose there are countless (see what I did there?) word counter plugins for Sublime Text out there; I chose WordCount. This simple plugin adds the number of words at the very left of the status bar, below the coding area.
On top of word counting, I also use WordCount to count the estimated number of pages (the number of words per page is configurable), since I tend to write books inside Sublime Text. It turns out to be quite handy to know the approximate number of pages for a given chapter in the blink of an eye.
Neat addition: when selecting a portion of content, WordCount gives the number of words in this selection only instead of the whole document.
Last major Sublime Text plugin for me: Sidebar Enhancements. For the record, this plugin has been made by the same person behind WordCount, so you can say this is good stuff.
Sidebar Enhancements, as the name states, improves the sidebar project manager by adding extra options on right click, such as a clipboard to actually copy and paste files, a move command, and much more.
Last time I had a fresh install of Sublime Text, I realized how poor the default sidebar is compared to the one provided by this excellent plugin. Highly recommended.
Paweł Grzybek, in the comments, asked for a spell checking feature. I don’t use it myself, but I know that Sublime Text does support spell checking through 2 options:
"spell_check": true,
"dictionary": "Packages/Language - English/en_US.dic"
The first one enables spell checking, and the second one is the dictionary used to perform the corrections. I am not entirely sure where to download a language dictionary file, but I suppose it is actually quite easy to find. If English is the only language you need spell checking for, then you have direct out-of-the-box support for it.
That’s it folks, you know all my secrets to writing in Sublime Text! I have been using this setup for years now and I don’t think this is going to change anytime soon. So far, so good.
Although, if you have any advice… I’m all ears! :)
The following is a guest post by Gregor Adams about how he managed to re-create the Netflix logo in CSS. Gregor is kind of the rising star when it comes to CSS, so needless to say it is a great honor to have him here.
A few months ago I tested Netflix, immediately got hooked and got myself an account. I started watching a lot of series that I usually had to view elsewhere. Each episode or movie starts with the Netflix logo animation.
I immediately started thinking about implementing this in CSS. So after watching a few episodes I went over to CodePen and started to work on the logo.
My first implementation was a little dirty since I was trying a few things.
For example: I wanted to do this in pure CSS and I also wanted to be able to run the animation again when I click a button, so I had to use some magic. Luckily I always have a few tricks up my sleeve when it comes to CSS.
But let’s talk about the actual animation.
I recorded the logo and looped it in Quicktime so I could examine it in detail. I tend to do that a lot because it allows me to stop at certain frames to figure out what is actually going on.
The logo:
So these were the animation steps I needed to replicate. But there is something else about the logo that I needed to take care of: the letters are tilted to the center of the logo.
People have been asking me how I did that… I do a lot of 3d experiments, so this wasn’t that much of a difficulty to me.
I started with some basic markup for the word “Netflix”
<div class="logo">
<span>N</span>
<span>E</span>
<span>T</span>
<span>F</span>
<span>L</span>
<span>I</span>
<span>X</span>
</div>
I made a wrapper with the class logo and wrapped each letter in a span. Then I rotated the letters on the y-axis and scaled them on the x-axis to retain their original width. The important part is setting a perspective on the wrapper and defining its perspective-origin.
// Basic letter styling
span {
font-size: 8em;
font-family: impact;
display: block;
}
// Enable a 3d space
.logo {
perspective: 1000px;
perspective-origin: 50% 0;
}
// Transform the letter
.logo span {
transform-origin: 0 0;
transform: scaleX(80) rotateY(89.5deg);
}
There are different ways of doing this, like using a different perspective (e.g. 500px), rotation angle (e.g. 9deg) and scale value (e.g. 0.5), but these values turned out to work best for my needs.
Here’s a demo on CodePen:
See the Pen netflix logo | (figure--1) by Gregor Adams (@pixelass) on CodePen.
Next I had to apply this to all the letters, keeping in mind that the middle letter is not transformed, that the ones to the right are tilted in the opposite direction, and that the height of the letters changes.
To do this I needed to add some logic, so I use Sass with the SCSS syntax.
.logo {
perspective: 1000px;
perspective-origin: 50% 0;
font-size: 8em;
display: inline-flex;
span {
font-family: impact;
display: block;
$letters: 7;
@for $i from 1 through $letters {
$offset: $i - ceil($letters / 2);
$trans: if($offset > 0, -89.5deg, 89.5deg);
&:nth-child(#{$i}) {
// trans/de-form the letters
transform-origin: 50% + 50%/$offset 200%;
font-size: if($offset == 0, 0.85em, 0.9em + 0.015*pow(abs($offset), 2));
transform: if(
$offset == 0,
scale(1, 1),
scale(95.9 - abs($offset) * 10, 1)
) if($offset == 0, translatey(0%), rotatey($trans));
}
}
}
}
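The per-letter math in the Sass loop above is easier to follow outside of stylesheet syntax. Here is a rough JavaScript sketch (for illustration only, mirroring the loop's variables) of how each letter's offset from the middle letter determines whether and in which direction it rotates:

```javascript
// Illustration of the Sass loop's math: for 7 letters, the middle letter
// (offset 0) is left untransformed, letters left of center (negative
// offset) rotate one way, letters right of center rotate the other way.
const letters = 7;
const middle = Math.ceil(letters / 2); // the 4th letter is the pivot

const plan = [];
for (let i = 1; i <= letters; i++) {
  const offset = i - middle; // mirrors `$offset: $i - ceil($letters / 2)`
  // mirrors `$trans: if($offset > 0, -89.5deg, 89.5deg)`,
  // with the untransformed middle letter annotated as 0
  const angle = offset === 0 ? 0 : (offset > 0 ? -89.5 : 89.5);
  plan.push({ letter: i, offset, angle });
}

// plan → letters 1..3 get offset -3..-1 and +89.5deg,
//        letter 4 gets offset 0 and no rotation,
//        letters 5..7 get offset 1..3 and -89.5deg
```

The growing |offset| is also what drives the font-size and scale formulas in the loop: letters farther from the center are compensated more.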
Here’s a demo on CodePen
See the Pen netflix logo (figure--2) by Gregor Adams (@pixelass) on CodePen.
Let’s write a function for the 3d-effect and the shadow. I paused on one frame of the video I had made before and looked at it in detail.
As you can see the 3d effect’s vanishing point is in the center while the shadow drops to the bottom right. Now we know what our function has to be able to do.
We will call this function inside keyframes so we want it to be able to handle a few values like:
We need one more argument to define the depth of the shadow or 3d-effect.
Here’s the function I am using to handle all these requirements:
/// Create a 3d-shadow in a certain direction
/// @author Gregor Adams
/// @param {Number} $depth - length of the shadow
/// @param {Color} $color - color of the shadow
/// @param {Unit} $x - step to next shadow on the x axis
/// @param {Unit} $y - step to next shadow on the y axis
/// @param {Unit} $blur - blur of the shadow
/// @param {Color|false} $mix - optionally add a color to mix in
/// @return {List} - returns a text-shadow
@function d3($depth, $color, $x: 1px, $y: 1px, $blur: 0, $mix: false) {
$shadow: ();
@for $i from 1 through $depth {
// append to the existing shadow
@if type-of($mix) != 'color' {
$shadow: append(
$shadow,
round($i * $x) round($i * $y) $blur $color,
comma
);
} @else {
$shadow: append(
$shadow,
round($i * $x) round($i * $y) $blur mix($mix, $color, 0.3%*$i),
comma
);
}
}
@return $shadow;
}
This function might be a little hard to understand for Sass noobs or for developers/designers who only use the basic features of the language, so let me explain it in detail.
I start off with a variable I called $shadow. It is an empty list.
$shadow: ();
I am looping from 1 through the depth; through in Sass means that the end value is included in the iteration.
from 0 to 5 = 0, 1, 2, 3, 4
from 0 through 5 = 0, 1, 2, 3, 4, 5
In each iteration I append a text-shadow to the list. So in the end the variable looks something like this:
$shadow: (
0 1px 0 red,
1px 2px 0 red,
2px 3px 0 red,
…
);
… and I use it like this:
text-shadow: d3(5, red, [$x], [$y], [$blur], [$mix]);
$x, $y, $blur and $mix are optional arguments. I already mentioned that I will call this function inside keyframes, so I need to be able to optionally change them. $mix allows adding a second color so the shadow fades from one to the other.
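To see what that fade amounts to, here is a rough JavaScript sketch of the blend the Sass function performs with mix($mix, $color, 0.3% * $i): a linear per-channel blend whose weight grows with each shadow step. This is an illustration under the assumption of opaque colors, where simple interpolation matches Sass's mix().

```javascript
// Linear blend of two RGB colors; `weight` is colorA's share (0..1).
// Approximates Sass's mix() for opaque colors (assumption: simple
// per-channel interpolation, no alpha handling).
function mix(colorA, colorB, weight) {
  return colorA.map((channel, i) =>
    Math.round(channel * weight + colorB[i] * (1 - weight))
  );
}

const red = [255, 0, 0];
const blue = [0, 0, 255];

mix(blue, red, 0.5);          // even blend → [128, 0, 128]
mix(blue, red, 0.003 * 10);   // step 10 of the shadow: still almost pure red
```

With a weight of only 0.3% per step, the mixed-in color creeps in very gradually along the shadow, which is exactly what makes the fade look smooth.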
Here’s a demo on CodePen:
See the Pen netflix logo (figure--3) by Gregor Adams (@pixelass) on CodePen.
Since I have created all the parts I need, I can now create the animation.
I am using two variables $offset
and $trans
which I have already defined above. The animation has 3 stages, so I can carefully decide when it reaches a certain point.
@keyframes pop-out {
0% {
transform: if($offset == 0, scale(1, 1), scale(95.9 - abs($offset) * 10, 1))
if($offset == 0, translatey(0%), rotatey($trans));
text-shadow: d3(15, rgba($c_3d, 0), 0, 0), d3(50, rgba($c_shadow, 0), 0, 0);
}
50% {
transform: if(
$offset == 0,
scale(1.2, 1.2),
scale(126.2 - abs($offset) * 10, 1.2)
) if($offset == 0, translatey(-16%), rotatey($trans));
text-shadow: d3(15, $c_3d, if($offset == 0, 0, -0.25px * $offset), 1px), d3(50, $c_shadow, 1px, 3px, 3px, $c_shadow-mix);
}
100% {
transform: if(
$offset == 0,
scale(1.1, 1.1),
scale(116.2 - abs($offset) * 10, 1.1)
) if($offset == 0, translatey(-12%), rotatey($trans));
text-shadow: d3(15, $c_3d, if($offset == 0, 0, -0.25px * $offset), 1px), d3(50, $c_shadow, 1px, 3px, 3px, $c_shadow-mix);
}
}
Now let’s do the same thing for fading back.
@keyframes fade-back {
0% {
transform: if(
$offset == 0,
scale(1.1, 1.1),
scale(116.2 - abs($offset) * 10, 1.1)
) if($offset == 0, translatey(-12%), rotatey($trans));
text-shadow: d3(15, $c_3d, if($offset == 0, 0, -0.25px * $offset), 1px), d3(50, $c_shadow, 1px, 3px, 3px, $c_shadow-mix);
}
20% {
transform: if(
$offset == 0,
scale(1.05, 1.05),
scale(105.9 - abs($offset) * 10, 1.05)
) if($offset == 0, translatey(-7%), rotatey($trans));
text-shadow: d3(15, rgba($c_3d, 0), 0, 0), d3(50, rgba($c_shadow, 0), 0, 0);
}
100% {
transform: if($offset == 0, scale(1, 1), scale(95.9 - abs($offset) * 10, 1))
if($offset == 0, translatey(0%), rotatey($trans));
text-shadow: d3(15, rgba($c_3d, 0), 0, 0), d3(50, rgba($c_shadow, 0), 0, 0);
}
}
I also needed to provide an animation to change the color.
@keyframes change-color {
0% {
color: $c_bg;
}
100% {
color: $c_fg;
}
}
Now we can chain these animations like so:
animation-name: pop-out, fade-back, change-color;
animation-duration: 4s, 2s, 0.1s;
animation-delay: 0s, 2s, 3.2s;
The code above is just an approximate example. Each letter has a different delay and duration. You can see the final implementation here: Netflix animation in pure CSS.
Final notice: I added some magic to retrigger the animation in pure CSS but that’s something I might explain in another article.
I am never really happy with my experiments, and while writing this article I found several ways I could improve the code and effect.
I rewrote the entire Sass code prior to writing this article but I still feel that I can improve some parts. That is the main reason why I never stop making experiments. It just makes me smarter and bends my thoughts in directions I never knew existed.
I barely make use of techniques like these in real-life projects but I very often use the functions I needed to implement that effect. Anyway, I hope you enjoyed this article.
Let me start with a couple of things I should have told myself more often during this journey.
I should have kept this in mind during those months, and so should you if you happen to write a book yourself.
The first thing I can tell you is that you will run out of time, I guarantee it. And this holds no matter how long your editor gives you to write your book. There is never enough time, because the work is never finished.
Trust me, you will always find things to improve. I found myself writing new sections and examples a couple of hours before handing the book back to the editor. And if I had a couple of extra days, I’d have spent those until I needed three more…
Our work is never over. — Daft Punk, Harder Better Faster Stronger
The last revision I made of the book, just before it went to press, involved no less than 250 edits. There is no such thing as “too much proofreading”, or even “too much work”, on such a colossal project. You could work on it forever.
My advice would be: plan well, and start early. Don’t think “6 months is huge, I have plenty of time”. When the due date comes closer, you’ll regret not having spent more time on your work earlier.
I spent the last week reading what I wrote over and over to make sure it was okay. I’ve read the whole book from A to Z at least 5 times, and I’ve spent countless hours on some sections (hey Grid Layout fucker). I was sick and tired of reading my own write-ups and I think it’s perfectly normal after months of working on the same thing.
At some point, you’ll get paranoid about what you come up with. “What if it sounds dull? What if it makes no sense? What if it’s not interesting enough?”.
It’s okay. Keep in mind people won’t be reading your book over and over again, and most of them will go through it pretty quickly, not paying extra attention to every single word. Don’t put too much pressure on yourself.
When Eyrolles came to me asking if I would like to write a book, I was like “yeeepeee! it’s like a really long article!”. I thought I would just have to sit in front of the keyboard, unleash the beast and be done with 300 pages of writing.
Nope. It doesn’t work like this. There is the fun part: writing content. And there is all the boring stuff that comes with: taking screenshots, indexing everything, making sure content flows correctly…
Also, you have to make sure people are okay with you using their work in your book. Cool demo? Make sure the author is okay with it. Beautiful photo? Ask the photographer whether they’re cool with you using it. This is not fun, it is annoying. But it has to be done.
Writing a book is a colossal task. Don’t think you’ll be able to manage it all by yourself from A to Z without any external help. You won’t.
Asking for help is not weakness, it’s perfectly normal. Also, asking for help doesn’t devalue your work whatsoever (as long as you don’t ask someone to do all the work for you, which would obviously be stupid).
Not sure of something? Ask someone to review it. Need some information? Ask on Twitter. Need a piece of advice regarding something specific? Find an expert on the topic and ask them. They will feel flattered, you will have your information: win/win.
When you come close to the end, ask someone to proof-read the whole book (yes, it takes some time). I had the amazing opportunity to have Raphaël Goetter, Geoffrey Crofte and Thomas Zilliox reading the whole thing. It surely helped a lot having something well tied and finished.
Having a much better taste for design than me, I asked my girlfriend to review all the images from the book, making sure the screenshots were actually appealing.
Long story short, have your work reviewed. It will pay.
About two months after we launched the project, I started getting anxious. Sleep started to elude me and I had trouble chilling. I had already some solid bases for my book, yet I was entering a phase where everything was started but nothing was finished. It was scary as shit.
After a couple of weeks and with the deadline coming closer, as surprising as it may be, I started feeling more peaceful. I got more and more confident in my work, feeling like I was actually building something good. This wasn’t scary anymore, it was exciting.
Even during my last week, I wasn’t stressed. I worked 5 hours every night after my 8-hours day, and spent 30 hours proofreading everything during the last week-end, without feeling a tiny bit anxious. At some point, stress doesn’t bring anything good to the game anymore.
Use your stress to get your shit done, but don’t let it overwhelm you. Just keep calm and keep working. Soon enough, there won’t be anxiety anymore. Soon enough, there will be a book waiting for you. :)
Actually, it was so welcome that some lovely folks started translating it into different languages. It is currently available in English, French, Spanish, Polish, Russian, Korean, Chinese, German, Italian, Portuguese, Danish, Dutch, Czech and Greek.
Anyway, managing different languages as part of a Jekyll-powered site turned out to be quite an interesting challenge in order to keep everything scalable, so I thought: why not write about it? Hence this post.
A translation of Sass Guidelines consists of a folder named after the language code of the translation, for instance en for English, or cz for Czech. This folder should contain all 18 chapters in Markdown (one file per chapter) as well as an index.md file to import them all.
For instance, the French translation looks like this:
fr/
|- _architecture.md
|- _author.md
|- _comments.md
|- _conditions.md
|- _contributing.md
|- _errors.md
|- _extend.md
|- _introduction.md
|- _loops.md
|- _mixins.md
|- _naming.md
|- _rwd.md
|- _sass.md
|- _syntax.md
|- _tldr.md
|- _toc.md
|- _tools.md
|- _variables.md
`- index.md
However I did not want each translation’s index to be in charge of importing the chapters in the correct order. What if I want to switch the position of two chapters? Having to update every index.md is not very convenient. Furthermore, some chapters are separated by the donate partial; this should not be language-specific but a global configuration.
Thus, I found a way to keep index.md clean and tidy, like so:
---
layout: default
language: fr
---
{% include chapters.html %}
That’s it. The only difference between the French index and the Polish index is the language variable in the YAML Front Matter. Everything else is handled by chapters.html.
This file (living in the _includes folder) is in charge of including all chapters from the current page’s language in the right order, including the donate partials. Thanks to the include_relative tag, it gets extremely easy to do:
{% include_relative _author.md %}
{% include_relative _contributing.md %}
{% include_relative _toc.md %}
{% include_relative _sass.md %}
{% include_relative _introduction.md %}
{% include_relative _syntax.md %}
{% include donate.html %}
{% include_relative _naming.md %}
{% include_relative _comments.md %}
{% include_relative _architecture.md %}
{% include_relative _rwd.md %}
{% include donate.html %}
{% include_relative _variables.md %}
{% include_relative _extend.md %}
{% include_relative _mixins.md %}
{% include_relative _conditions.md %}
{% include donate.html %}
{% include_relative _loops.md %}
{% include_relative _errors.md %}
{% include_relative _tools.md %}
{% include_relative _tldr.md %}
{% include donate.html %}
This tag from Jekyll makes it possible to include a file not from the _includes folder but from the current folder. Now this is where it gets tricky: while chapters.html lives in _includes, {% include_relative %} doesn’t include from the _includes folder but from the folder of the page requesting it (the one including chapters.html), for instance fr/.
That’s pretty much how it works.
Now, content is not everything [citation needed]. There are also some UI components to translate, such as the baseline, the footer and the donate partial.
As a matter of convenience, all UI translations live in a translations.yml file in the _data folder so they can be accessed from the views. This file is structured as follows:
en:
donate:
content: 'If you enjoy Sass Guidelines, please consider supporting them.'
button: 'Support Sass Guidelines'
baseline:
content: 'An opinionated styleguide for writing sane, maintainable and scalable Sass.'
footer:
content: 'Made with love by [Kitty Giraudel]()'
note: 'Note'
# Other languages…
At this point, it is a breeze to access this content from a partial, such as donate.html.
<div class="donate">
<div class="donate__content">
<p>{{ site.data.translations[page.language].donate.content }}</p>
<a
href="https://gum.co/sass-guidelines"
target="_blank"
rel="noopener noreferrer"
class="button"
>
{{ site.data.translations[page.language].donate.button }}
</a>
</div>
</div>
Easy peasy! It works exactly the same for the baseline, the footer and pretty much any UI component we want to translate to the current language. Pretty neat, right?
If you have checked one of the currently available translations, you may have noticed a message right under the baseline introducing the translators and warning about outdated information. Obviously, this is not manually written. Actually, the data is pulled from another YML file, languages.yml this time, looking like this:
fr:
version: 1.0.0
label: French
prefix: /fr/
available: true
translators:
- name: Pierre Choffé
link: https://la-cascade.io/
# Other languages…
I am sure you have figured out where this is going. We only need a partial included within the layout itself (since it is always there). Let’s call it translation-warning.html. One thing before jumping into the code: we need to display a completely different message on the English version. I took this as an opportunity to tell people that Sass Guidelines is being translated into other languages, so they can switch from the options panel.
{% if page.language == "en" %}
<div class="translation-warning">
<p>
The Sass Guidelines project has been translated into several languages by
<a
target="_blank"
rel="noopener noreferrer"
href="https://github.com/KittyGiraudel/sass-guidelines/blob/gh-pages/_data/languages.yml"
>generous contributors</a
>. Open the
<span data-toggle="aside" class="link-like" role="button" aria-expanded
>options panel</span
>
to switch.
</p>
</div>
{% else %} {% capture translators %}{% for translator in
site.data.languages[page.language].translators %}<a
href="{{ translator.link }}"
target="_blank"
rel="noopener noreferrer"
>{{ translator.name }}</a
>{% if forloop.last == false %}, {% endif %}{% endfor %}{% endcapture %}
<div class="translation-warning">
<p>
You are viewing the {{ site.data.languages[page.language].label }}
translation by {{ translators }} of the original
<a href="/">Sass Guidelines</a> from
<a target="_blank" rel="noopener noreferrer" href="">Kitty Giraudel</a>.
</p>
<p>
This version is exclusively maintained by contributors without the review of
the main author, therefore might not be completely up-to-date{% if
site.data.languages[page.language].version != site.data.languages.en.version
%}, especially since it is currently in version {{
site.data.languages[page.language].version }} while the
<a href="/">English version</a> is in version {{
site.data.languages.en.version }}{% endif %}.
</p>
</div>
{% endif %}
Okay, that might look a little complicated. Worry not, it is not as complex as it looks. Let’s leave aside the English part since it is fairly obvious, and focus on the {% else %} block. The first thing we need is to compute a string from the array of translators we have in our YML file. This is what the {% capture %} tag does.
A YML file such as:
gr:
version: 1.0.0
label: Greek
prefix: /gr/
available: false
translators:
- name: Adonis K.
link: https://github.com/varemenos
- name: Konstantinos Margaritis
link: https://github.com/kmargaritis
…will be captured as this HTML string:
<a href="https://github.com/varemenos">Adonis K.</a>,
<a href="https://github.com/kmargaritis">Konstantinos Margaritis</a>
Then this HTML string can be safely used as part of our paragraph with {{ translators }}.
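For what it's worth, the same string-building logic can be sketched in plain JavaScript (a hypothetical translatorsToHTML helper, not part of the Jekyll setup), which may make the {% capture %} block easier to follow:

```javascript
// Hypothetical sketch: build the translators HTML string from an
// array of { name, link } objects, the way the {% capture %} loop does.
function translatorsToHTML(translators) {
  return translators
    .map(function (t) {
      return '<a href="' + t.link + '">' + t.name + '</a>'
    })
    .join(', ')
}
```

Each translator becomes a link, and entries are joined with a comma, exactly like the `{% if forloop.last == false %}, {% endif %}` separator.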
The second paragraph is intended to warn against outdated information. To make it quite clear when a version is obsolete, we compare the English version (stored in languages.yml) with the current language's version. If the latter is lower, it means the translation is outdated, in which case we explicitly say so.
I am still looking for extra languages, such as Japanese, Norwegian, Swedish, Finnish, and so on. If you speak one of these languages or know someone who would like to translate Sass Guidelines, please be sure to get in touch!
]]>Why would we care?, you think. Wait, I am not done yet. They have been taught to use tables for layout. It's 2015 and some web teacher in some public school is teaching their students to use HTML tables for layout and Dreamweaver as a working environment. This is wrong on so many levels.
Then I thought, okay, maybe the teacher is more of a design person than an actual developer. I was wrong again. The teacher forbade them to use bright colors because it would look unprofessional. Along the same lines, the teacher advised them to have an 800px wide centered container, good ol' fashioned style. Please.
Have you ever taken over a project only to find the code looks like it has been vomited by some weird Godzilla? Me too. When it happens, we tend to think "god, the developer was a mess". Not entirely wrong, I suppose. But if they have done things like this, it is probably because someone told them to do it like this.
Coding is slowly becoming an important skill that is being taught in many curricula, even those only remotely related to the web. Some schools even introduce code to very young children. I think this is amazing. Not only because I am a developer myself, but because I do think having basic coding skills is valuable in everyday life now that the Internet is everywhere.
For instance: having a basic knowledge of what the Internet is, what a browser is, how it works, what the essential languages for building websites are, what a database is, what the risks of giving sensitive information online are, how to spot phishing attempts… This would prevent situations where people feel the urge to reboot their computer when there is a JavaScript issue on a web page, or give their credit card information to phishing websites.
Meanwhile, many people end up saying they don't like to code. Understandably so: it is quite a specific discipline. However, more often than not, I think people don't cling to coding because they are being very poorly taught. Of course you cannot enjoy writing CSS if no one even bothers to explain the box model to you. These are the fucking basics.
There are few people who like code for what it is. I am one of those people, but that does not matter. Most people see code as a means to an end, not an end in itself. To make them enjoy coding, you need to give them a real project. Something they want to achieve. A goal. It could be anything: a portfolio, a little app to write cooking recipes, a game, whatever, as long as it's stimulating.
My little cousin was asked to build a cooking website. She had to scrape some recipes from Marmiton.org, then turn them into a website. While the idea makes sense, I still don't think it is a good one. This is certainly not something exciting for her, hence I don't see how she could enjoy building this site. And, oh, she didn't.
A more suitable exercise would be to ask each student to pick a recipe of their own, and display it the way they want as long as they write valid markup, cross-browser styles, everything powered by a well-thought design.
This would be much better than asking them to have 25 HTML files with the header, footer and sidebar repeated in all of them. This teaches nothing, and certainly does not reflect the way we actually build websites.
Frontend development, and more generally web development, has witnessed a tremendous evolution since its early stages. Because of this, people whose job is to teach web development should be aware that we don't build websites the way we did 10 years ago. Or 5 years ago. Things have changed, and teaching should change as well.
You don't teach people to build websites using tables for layout or Dreamweaver as an editor. You give them a project to think about, and teach them the basics: the box model and valid markup. You don't teach aspiring web developers Flash or Flex. You tell them about mobile-first design. Git. JavaScript. Grids. Postprocessors and preprocessors. Tooling.
Tell them about what they will use when building websites and applications. Not what you used when you started. This is likely to be outdated. Stop fucking up code learning.
]]>What a clever little experiment it was, yet I can't say I am completely fond of the way it has been implemented. Not only are colors restricted between #000000 (00:00:00) and #235959 (23:59:59), but the JavaScript part did not really please me either. So here is my try.
There are two things I wanted to give specific attention to:
Alright, let’s go.
See the Pen Color Clock by Kitty Giraudel (@KittyGiraudel) on CodePen.
Let’s start with a little skeleton for our application:
;(function () {
'use strict'
// Our main function
function colorClock() {
// …
}
// Call our function every second
var timer = setInterval(colorClock, 1000)
})()
Nothing special here: every second, we call the colorClock function. This function will have to do three things: display the current time, compute a color from it, and apply that color to the page.
Displaying the current time is probably the easiest part of the exercise, although I must say I got help from a Stack Overflow answer.
function colorClock() {
// …
function dateToContent(date) {
return date.toTimeString().replace(/.*(\d{2}:\d{2}:\d{2}).*/, '$1')
}
var date = new Date()
document.body.innerHTML = dateToContent(date)
}
Let's tackle the actual challenge. My thought process was as follows. Our time is made of 3 components: hours, minutes and seconds. A color is also made of 3 components: red, green and blue channels. If I convert each time component to a value between 0 and 255, I can build a color from the current time (where hours are converted to red, minutes to green and seconds to blue).
Alright. The first thing we need is to compute our color channels based on the current time. To do so, we need an RGBFromDate function that takes an instance of Date and returns an array of 3 channels expressed as (rounded) numbers between 0 and 255.
function RGBFromDate(date) {
return [
(date.getHours() / 24) * 255,
(date.getMinutes() / 60) * 255,
(date.getSeconds() / 60) * 255,
].map(function (e) {
return Math.round(e)
})
}
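As a quick sanity check, here is the same channel computation with explicit time components instead of a Date instance (a hypothetical channelsFromTime helper mirroring RGBFromDate):

```javascript
// Sketch of the channel computation with explicit time components,
// mirroring RGBFromDate: hours map to red, minutes to green, seconds to blue.
function channelsFromTime(hours, minutes, seconds) {
  return [
    (hours / 24) * 255,
    (minutes / 60) * 255,
    (seconds / 60) * 255,
  ].map(function (e) {
    return Math.round(e)
  })
}
```

For instance, 12:30:45 yields [128, 128, 191].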
At this point, we have everything we need to apply the color to the body.
var date = new Date()
var channels = RGBFromDate(date)
document.body.style.backgroundColor = 'rgb(' + channels.join(',') + ')'
Last but not least, we need to find a way to change the font color if the background color is too dark or too light, so the text remains readable at all times. To do this, we have to compute the luminance of the color. If it is higher than 0.7, the color is very bright and the text should be black.
function colorLuminance(red, green, blue) {
return (0.299 * red + 0.587 * green + 0.114 * blue) / 256
}
function colorFromRGB(red, green, blue) {
return colorLuminance(red, green, blue) > 0.7 ? 'black' : 'white'
}
document.body.style.color = colorFromRGB.apply(this, channels)
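To get a feel for the 0.7 threshold, here is the luminance formula again (repeated so it can run on its own), evaluated at both extremes:

```javascript
// Same weighted-sum luminance formula as above.
function colorLuminance(red, green, blue) {
  return (0.299 * red + 0.587 * green + 0.114 * blue) / 256
}

// Pure white scores roughly 0.996 (above 0.7, so black text),
// while pure black scores 0 (below 0.7, so white text).
var whiteLuminance = colorLuminance(255, 255, 255)
var blackLuminance = colorLuminance(0, 0, 0)
```

The weights favor green because our eyes are most sensitive to it, so a bright green background flips to black text sooner than a bright blue one.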
That’s it. Here is the final code:
;(function () {
'use strict'
function colorClock() {
// Get RGB channels from a date
function RGBFromDate(date) {
return [
(date.getHours() / 24) * 255,
(date.getMinutes() / 60) * 255,
(date.getSeconds() / 60) * 255,
].map(function (e) {
return Math.round(e)
})
}
// Get color luminance as a float from RGB channels
function colorLuminance(red, green, blue) {
return (0.299 * red + 0.587 * green + 0.114 * blue) / 256
}
// Get font color from RGB channels from background
function colorFromRGB(red, green, blue) {
return colorLuminance(red, green, blue) > 0.7 ? 'black' : 'white'
}
// Get formatted date
function dateToContent(date) {
return date.toTimeString().replace(/.*(\d{2}:\d{2}:\d{2}).*/, '$1')
}
var date = new Date()
var channels = RGBFromDate(date)
document.body.style.color = colorFromRGB.apply(this, channels)
document.body.style.backgroundColor = 'rgb(' + channels.join(',') + ')'
document.body.innerHTML = dateToContent(date)
}
var timer = setInterval(colorClock, 1000)
})()
You can play with the code on CodePen:
See the Pen Color Clock by Kitty Giraudel (@KittyGiraudel) on CodePen.
Hope you liked it!
]]>“I SHOULD WRITE SASS GUIDELINES!”
— Me, in the shower.
After two days working on them, I am very proud and excited to release a 10,000-word-long styleguide on working with Sass: sass-guidelin.es.
Game on, folks! @SassCSS guidelines, just for you: https://t.co/8ybeXdBOFY.
— Kitty Giraudel (@KittyGiraudel) January 6, 2015
I think we have been in need of Sass guidelines for months now. Here is my shot at it. Note, however, that this document is very opinionated. This is a styleguide, not the styleguide.
In it, I tackle almost all aspects of the Sass language: colors, strings, nesting, variables, mixins, extend, warnings, architecture, tools… I may have missed something, but I would be glad to complete it with your ideas.
I worked like crazy for two days to have a first version that is good enough to be released. I think I nailed it. Now, we can always improve things. For instance, some people have been complaining about the use of double quotes, which seem to be a pain to type on an American keyboard. Fair enough. I opened a pull request to move to single quotes instead.
Similarly, Ian Carrico seems a bit upset by my aggressive no-@extend rule. While this is an opinionated document, I feel like I can still round off the edges and make things a little better for everybody, so I need to rewrite the section about extending selectors.
Also, and I need your help with this, for such a styleguide to make sense, it has to get popular. It has already received some good vibes yesterday thanks to all your tweets (especially CSS-Tricks and Smashing Magazine, let’s be honest).
Tweet it, upvote it on Hacker News and reddit and above all: tell me what you think. This is the only way for me to improve it.
Last but not least, if this project helps you get started with Sass, if it helps your team stay consistent, or if you simply like it, consider supporting the project. Like CSS Guidelines, this document is completely free. Still, it took a lot of time to write and will take even more to keep up-to-date. Anyway, if you want to buy me a beer through Gumroad, that would be awesome. :)
]]>The end of the year often means looking back at the last dozen months to see what has been done and what has not. Because of this, this article is mostly personal; kind of a note to myself to keep track of what I have done this year.
Note: let me take this occasion as an opportunity to suggest you read this article from Eric Meyer about this whole "your-year-on-our-network" thing. Quite heartbreaking, and definitely insightful for all of us designers and developers.
In early 2014, I officially started writing as a freelancer. Since then, this secondary activity has allowed me to invoice sites for articles, including SitePoint and Tuts+.
Along the same lines, I have written a total of 91 articles published on 6 different sites in 2014. That is not quite, but very close to, 2 articles a week. I am very proud of such a pace and hope I'll be able to keep writing at a decent rate, if not as much.
In May, I gave my very first talk outside of France, in Brussels, Belgium. It was almost 1000km away for a 30-minute talk during a 2-hour event. Needless to say, I expected a lot from it! It went very well. Belgium is a very welcoming country, if you ever wondered.
Later this year, in November, I gave my first talk in English, in Paris for dotCSS. A 15-20 minutes long talk in front of 400 attendees at the Théâtre des Variétés. Amazing venue, great people, it was an incredible experience.
Apparently I wrote a book this year. It is not published yet; it will be in January 2015 if I am not mistaken. Anyway, I spent over 7 months working on it with the help of Raphaël Goetter, so I think it deserves a big spot on the list of things I've done in 2014.
The book will be entitled "CSS 3, Pratique du Design Web" (French for "CSS 3 for Web Design"), and published by French publisher Eyrolles. Oh, and the preface is from Chris Coyier himself.
An article specifically about this will be released soon.
Since June, I have been working on SassDoc, a documentation tool for Sass, with Pascal Duez, Valérian Galliat and Fabrice Weinberg.
SassDoc has been very well received, and some massive projects already use it to manage their API docs, including The Guardian, Bourbon, Neat and Origami from the Financial Times. We are thrilled to see what such interesting projects do with ours.
Well, aside from releasing my book, I'd like to keep giving talks, possibly in English now that I know I am capable of speaking in English in front of people. My goal would be a talk per quarter, but we all know timing is hard, so we'll see.
SassDoc should also hit v2 with major changes in the next few weeks, and we have a lot of plans for v2.1 and further. Our goal for this year is to make it the go-to tool for Sass documentation, let’s hope we achieve this.
Also, I should be changing jobs in the next few months, at the very beginning of quarter 2 if everything plays out right, but I think I'll deal with this in another article once it's all settled.
What about you my friends, what have you been up to in 2014?
]]>A single-line end-ellipsis is often used when you have some long content that you want to hide behind a … to prevent a line-break. It is very easy to do. You can implement it with:
/**
* 1. Hide any overflow
* 2. Prevent any line-break
* 3. Add ellipsis at end of line
*/
.ellipsis {
overflow: hidden; /* 1 */
white-space: nowrap; /* 2 */
text-overflow: ellipsis; /* 3 */
}
For instance, consider this content:
The answer to life, the universe, and everything is 42.
If you have some restricted width and apply the .ellipsis class:
The answer to life, the univer…
Now what if you want to display the end of content and add ellipsis at beginning of line? Something like:
…niverse, and everything is 42.
That is what I call a reverse ellipsis, although I suspect the CSS specification would call it a start ellipsis, since the current value for text-overflow is actually called end-overflow-type. Anyway, now it's your turn. I have created a pen if you want to play with the initial code:
See the Pen 5582f35c9596c40ae947bad2f5993fb2 by Kitty Giraudel (@KittyGiraudel) on CodePen.
Beware, next content is spoiler!
Many of you have been advising the use of direction: rtl as a magic solution.
I suspect all of you who suggested this run Firefox, in which it does work like a charm (well, kind of). Unfortunately, Firefox is the only browser behaving correctly in right-to-left with text-overflow: ellipsis.
That being said, I am not sure why, but Firefox does eat the full stop at the end of the content. It does not happen with other characters as far as I can tell. If someone has an explanation for this, please report it.
In other browsers, especially Chrome, the start ellipsis is correctly displayed but not the end of content. It leads to something like:
…The answer to life, the univer
No luck. :(
So there is no magic one-liner to make it work everywhere. Fortunately, some of you are very creative and came up with smart hacks to achieve the desired effect. The best solution given so far is the one from Michael Godwin:
.reverse-ellipsis {
text-overflow: clip;
position: relative;
background-color: white;
}
.reverse-ellipsis:before {
content: '\02026';
position: absolute;
z-index: 1;
left: -1em;
background-color: inherit;
padding-left: 1em;
margin-left: 0.5em;
}
.reverse-ellipsis span {
min-width: 100%;
position: relative;
display: inline-block;
float: right;
overflow: visible;
background-color: inherit;
text-indent: 0.5em;
}
.reverse-ellipsis span:before {
content: '';
position: absolute;
display: inline-block;
width: 1em;
height: 1em;
background-color: inherit;
z-index: 200;
left: -0.5em;
}
A couple of issues with Michael's solution: it requires an extra element inside .reverse-ellipsis (here a span), and it relies on a known, solid background color for the masking to work. That being said, it is — as far as I can tell — the only solution I have seen that does work even if content does not overflow. All other solutions always display the ellipsis, even when content does fit within the container, which is a bit aggressive, yielding something like:
… Here is some short content.
This is far from ideal, and Michael’s solution prevents this so congratulations to Michael Godwin.
See the Pen NPNZRx by Godwin (@Godwin) on CodePen.
Cheers to all of you who tried, and if you come up with something better, please be sure to share. ;)
]]>The following is a guest post by David Khourshid about how he managed to build a specificity calculator in Sass. In all honesty, I could not have done any better than David with this, so I have to say I am very glad to have him talk about his experiment here.
As any web developer who has to write CSS knows, specificity is both an important and confusing concept. You might be familiar with principles such as avoiding nesting and IDs to keep specificity low, but knowing exactly how specific your selectors are can provide you with valuable insight for improving your stylesheets. Understanding specificity is especially important if you are culpable of sprinkling !important throughout your CSS rules in frustration, which, ironically, makes specificity less important.
TL;DR: Check out the source (and examples) here on SassMeister or directly on GitHub.
In short, specificity determines how specific a selector is. This might sound like a tautology, but the concept is simple: rules contained in a more specific selector will have greater weight over rules contained in a less specific selector. This plays a role in the cascading part of CSS, and ultimately determines which style rule (the one with the greatest weight) will be applied to an element. Specifically, the specificity of a selector is the collective multiplicity of its simple selector types.
There are plenty of articles that further explain/simplify specificity:
The algorithm for calculating the specificity of a selector is surprisingly simple. A simple selector can fall into one of three types: ID selectors (type A); class, attribute, and pseudo-class selectors (type B); and element type and pseudo-element selectors (type C).
Compound and complex selectors are composed of simple selectors. To calculate specificity, simply break your selector apart into simple selectors, and count the occurrences of each type. For example:
#main ul li > a[href].active.current:hover {
}
…has 1 ID (type A) selector, 2 class + 1 attribute + 1 pseudo-class (type B) selectors, and 3 element type (type C) selectors, giving it a specificity of 1, 4, 3. We'll talk about how we can represent this accurately as an integer value later.
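To make the counting concrete, here is a rough JavaScript sketch of the tallying step (a hypothetical tallySpecificity helper, not David's Sass implementation below): it assumes the selector has already been split into simple selectors, and ignores pseudo-elements for brevity.

```javascript
// Rough sketch: tally specificity types from pre-split simple selectors.
// '#' → type A; '.', '[' or ':' → type B; anything else → type C (element).
// (Pseudo-elements are deliberately left out for brevity.)
function tallySpecificity(simpleSelectors) {
  var counts = { a: 0, b: 0, c: 0 }
  simpleSelectors.forEach(function (s) {
    if (s[0] === '#') counts.a += 1
    else if ('.[:'.indexOf(s[0]) !== -1) counts.b += 1
    else counts.c += 1
  })
  return counts
}
```

Fed the simple selectors of the example above (['#main', 'ul', 'li', 'a', '[href]', '.active', '.current', ':hover']), it returns { a: 1, b: 4, c: 3 }.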
Now that we have our basic algorithm, let’s dive right in to calculating specificity with Sass. In Sass 3.4 (Selective Steve), one of the major new features was the addition of many useful selector functions that might have seemed pretty useless…
…until now. (Okay, I’m sure people have found perfectly good uses for them, but still.)
First things first, let’s determine what our API is going to look like. The simpler, the better. I want two things:
Great; our API will look like this, respectively:
/// Returns the specificity map or value of given simple/complex/multiple selector(s).
/// @access public
/// @param {List | String} $initial-selector - selector returned by '&'
/// @param {Bool} $integer - output specificity as integer? (default: false)
/// @return {Map | Number} specificity map or specificity value represented as integer
@function specificity($selector, $integer) {
}
/// Outputs specificity in your CSS as (invalid) properties.
/// Please, don’t use this mixin in production.
/// @access public
/// @require {function} specificity
/// @output specificity (map as string), specificity-value (specificity value as integer)
@mixin specificity() {
}
Looks clean and simple. Let’s move on.
Consider a simple selector. In order to implement the algorithm described above, we need to know what type the simple selector is - A, B, or C. Let’s represent this as a map of what each type begins with (I call these type tokens):
$types: (
c: (':before', ':after', ':first-line', ':first-letter', ':selection'),
b: ('.', '[', ':'),
a: ('#')
);
You'll notice that the map is in reverse order, and that's because of our irritable colon (:) - both pseudo-elements and pseudo-classes start with one. The less general (pseudo-element) selectors are filtered out first so that they aren't accidentally categorized as type B selectors.
Next, according to the W3C spec, :not() does not count towards specificity, but the simple selector inside the parentheses does count. We can grab that with some string manipulation:
@if str-index($simple-selector, ':not(') == 1 {
$simple-selector: str-slice($simple-selector, 6, -2);
}
Then, iterate through the $types map and see if the $simple-selector begins with any of the type tokens. If it does, return the type.
@each $type-key, $type-tokens in $types {
@each $token in $type-tokens {
@if str-index($simple-selector, $token) == 1 {
@return $type-key;
}
}
}
As a catch-all, if none of the type tokens matched, then the simple selector is either the universal selector (*) or an element type selector. Here's the full function:
@function specificity-type($simple-selector) {
$types: (
c: (':before', ':after', ':first-line', ':first-letter', ':selection'),
b: ('.', '[', ':'),
a: ('#')
);
$simple-selector: str-replace-batch($simple-selector, '::', ':');
@if str-index($simple-selector, ':not(') == 1 {
$simple-selector: str-slice($simple-selector, 6, -2);
}
@each $type-key, $type-tokens in $types {
@each $token in $type-tokens {
@if str-index($simple-selector, $token) == 1 {
@return $type-key;
}
}
}
// Ignore the universal selector
@if str-index($simple-selector, '*') == 1 {
@return false;
}
// Simple selector is type selector (element)
@return c;
}
Fair warning, this section might get a bit mathematical. According to the W3C spec:
Concatenating the three numbers a-b-c (in a number system with a large base) gives the specificity.
Our goal is to represent the multiplicity of the three types (A, B, C) as a (base 10) integer from a larger (base ??) number. A common mistake is to use base 10, as this seems like the most straightforward approach. Consider a selector like:
body nav ul > li > a + div > span ~ div.icon > i:before {
}
This complex selector doesn't look too ridiculous, but its type map is a: 0, b: 1, c: 10. If you multiply the type counts by 10², 10¹, and 10⁰ respectively, and add them together, you get 20. This implies that the above selector has the same specificity as two classes.
This is inaccurate.
In reality, even a selector with a single class should have greater specificity than a selector with any number of (solely) element type selectors.
I chose base 256 (16²) to represent two hexadecimal digits per type. This is historically how specificity was calculated, but it also lets 256 classes override an ID. The larger you make the base, the more accurate your (relative) specificity will be.
Our job is simple now. Multiply the multiplicity (frequency) of each type by an exponent of the base according to the map (a: 2, b: 1, c: 0) (remember - type A selectors are the most specific). E.g. the selector #foo .bar.baz > ul > li would have a specificity type map of (a: 1, b: 2, c: 2), which would give it a specificity of 1 × 256² + 2 × 256¹ + 2 × 256⁰ = 66050. Here's that function:
@function specificity-value($specificity-map, $base: 256) {
$exponent-map: (
a: 2,
b: 1,
c: 0
);
$specificity: 0;
@each $specificity-type, $specificity-value in $specificity-map {
$specificity: $specificity + ($specificity-value * pow($base, map-get($exponent-map, $specificity-type)));
}
@return $specificity;
}
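For comparison, the same base-256 computation can be sketched in JavaScript (a hypothetical specificityValue helper):

```javascript
// Sketch of the base-256 specificity value: type A is the most
// significant "digit", followed by type B, then type C.
function specificityValue(a, b, c, base) {
  base = base || 256
  return a * base * base + b * base + c
}
```

With the (a: 1, b: 2, c: 2) example above, specificityValue(1, 2, 2) returns 66050.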
Thankfully, with Sass 3.4's selector functions, we can split a selector list comprised of complex and compound selectors into simple selectors. We're going to be using two of these functions: selector-parse($selector) to split a selector list into a list of selectors, and simple-selectors($selector) to split each compound/complex selector into a list of simple selectors.
Some points to note: I'm using a homemade str-replace-batch function to remove combinators, as these don't count towards specificity:
$initial-selector: str-replace-batch(#{$initial-selector}, ('+', '>', '~'));
And more importantly, I’m keeping a running total of the multiplicity of each simple selector using a map:
$selector-specificity-map: (
a: 0,
b: 0,
c: 0
);
Then, I can just use my previously defined function specificity-type to iterate through each simple selector ($part) and increment the $selector-specificity-map accordingly:
@each $part in $parts {
$specificity-type: specificity-type($part);
@if $specificity-type {
$selector-specificity-map: map-merge(
$selector-specificity-map,
(
#{$specificity-type}: map-get(
$selector-specificity-map,
$specificity-type
) + 1
)
);
}
}
The rest of the function just returns the specificity map (or integer value, if desired) with the highest specificity, as determined by the specificity-value function, by keeping track of it here:
$specificities-map: map-merge(
$specificities-map,
(specificity-value($selector-specificity-map): $selector-specificity-map)
);
Here’s the full function:
@function specificity($initial-selector, $integer: false) {
$initial-selector: str-replace-batch(#{$initial-selector}, ('+', '>', '~'));
$selectors: selector-parse($initial-selector);
$specificities-map: ();
@each $selector in $selectors {
$parts: ();
$selector-specificity-map: (
a: 0,
b: 0,
c: 0
);
@each $simple-selectors in $selector {
@each $simple-selector in simple-selectors($simple-selectors) {
$parts: append($parts, $simple-selector);
}
}
@each $part in $parts {
$specificity-type: specificity-type($part);
@if $specificity-type {
$selector-specificity-map: map-merge(
$selector-specificity-map,
(
#{$specificity-type}: map-get(
$selector-specificity-map,
$specificity-type
) + 1
)
);
}
}
$specificities-map: map-merge(
$specificities-map,
(specificity-value($selector-specificity-map): $selector-specificity-map)
);
}
$specificity-value: max(map-keys($specificities-map)...);
$specificity-map: map-values(map-get($specificities-map, $specificity-value));
@return if($integer, $specificity-value, $specificity-map);
}
So, aside from this being another application of a rethinking of Atwood’s Law, knowing exactly how specific your selectors are can be much more beneficial than seeing in your dev tools that your desired styles have been overridden by another style for some relatively unknown reason (which I’m sure is a common frustration). You can easily output specificity as a mixin:
@mixin specificity() {
specificity: specificity(&);
specificity-value: specificity(&, true);
}
On top of this, you can find some way to communicate the specificities of your selectors to the browser in development, and output a specificity graph to ensure that your CSS is well-organized.
You can take this even further and, if you have dynamic selectors in your SCSS, know ahead of time which one will have the highest specificity:
@if specificity($foo-selector, true) > specificity($bar-selector, true) {
// …
}
The full source for the specificity functions/mixins, as well as examples, is available here on SassMeister:
]]>It was the first time that organisers of dotConferences, mostly famous for dotJS happening on November 17th, were producing a dotCSS and I have to say Sylvain Zimmer, Ferdinand Boas and Maurice Svay (as well as all the people who helped) really did a great job with this one.
I can't speak for the attendees, but as a speaker I must say they took great care of me. Everything was arranged so that I, like the other speakers, wouldn't be under too much pressure once on stage and could actually enjoy the experience. Challenge completed; more about that later on.
Anyway, if you felt it was risky to come to dotCSS this year because it was the first edition, be sure to come next year, because it was so much fun!
The conference happened at the Théâtre des Variétés in Paris, a beautiful 19th-century reddish and goldish theatre with balconies, great lights and everything. It was absolutely gorgeous. The stage was not huge but definitely big enough to move around a bit. I think one could not dream of a better location to give a talk.
As I said, the lineup was really appealing. Quite impressive how a new event like dotCSS was able to gather so many talented people in the same room (note that I don’t necessarily include myself in this).
The whole event was mono-track, meaning there was always a single speaker giving a presentation at any given time, which is much better if you ask me. And all talks were 18 minutes long.
The 18-minute format is probably one of the best. Not only is time management much easier than for lightning talks (4-5 minutes) and long talks (45-60 minutes), but the audience is also much more receptive.
I don't think the attention span for a talk is meant to last any longer than 30-something minutes. At some point, people just get bored. I feel like this point happens between 20 and 30 minutes; earlier if they are not interested in the topic, slightly later if the speaker is really good on stage.
Anyway, allow me to give you a quick round-up.
Daniel Glazman, co-chairman of the CSS Working Group, opened the stage with a talk about how CSS got there, what mistakes were made, and why. I was not really familiar with Daniel's work before the event, so I found his talk very insightful. Plus, he really is a talented speaker with great humour, so I could not think of a better person to open the event.
Then Kaelig, a French frontend developer previously at the BBC and the Guardian, now at the Financial Times, presented a very interesting talk about bridging the gap between designers and developers (essentially using Sass) in big teams such as The Guardian's.
Kaelig was followed by Harry Roberts, with probably the least technical talk of the day (but definitely not the least interesting!): Ten Principles for Frontend Development. In this case, Harry applied it to CSS, but it ended up being a very generic talk that could apply to many languages or even professions.
Then there were some lightning talks that I did not really catch because I was backstage getting prepared, but I always have profound respect for lightning speakers: I feel like time management is hell for such short presentations.
I came next with a deck entitled Keep Calm And Write Sass. It was an 18-minute talk about the do's and don'ts of using Sass, especially the don'ts. My point was to try to get people focused on Sass's main role: helping write CSS, not making codebases more complex.
Estelle Weyl then presented CSS WTF, a collection of little known facts about CSS that ended up being quite technical actually. Counters in HTML forms, SVG animations, contenteditable attribute for head elements and much more. If you like clever stuff, have a look at her deck.
After a second break, Nicolas Gallagher presented an insightful talk about scaling CSS, essentially based on his experience at Twitter. While not necessarily applicable right now to any project, it is interesting to know how such a large-scale company manages its CSS codebase.
The inventor of CSS, Bert Bos, came next with a presentation about typography on the web, and how HTML and CSS are currently poorly suited for it. What's funny is that Bert actually ended up (implicitly) asking the audience how they would do it, rather than coming and saying "this is how it should be done". Food for thought.
Last but not least, Ana Tudor gave a talk about shapes and CSS and Sass and geometry and craziness. Her scientific brain never fails to amuse me, and as always, her presentation was very impressive.
It was my first talk in English, and as far as I can tell it went quite well. I felt absolutely no pressure thanks to the supporting organisers and speakers and everything went very smoothly.
When I came up on stage, like the other speakers, I couldn’t see a single face in the audience. The lights were all turned towards the stage, and the room was kept dark, so all I could see was bright (though not blinding) light.
Interestingly enough, I realised that I feel much more confident when I don’t see people’s faces. Seeing people is distracting, because you may witness reactions you’d rather not see while trying to deliver a clear talk.
So facing a black wall was actually much easier than expected. It allowed me to keep track of my thoughts without being disturbed. Loved it.
Anyhow, things went great from what I can tell. There were two screens right below the stage, one with the timer, one mirroring the current slide (so speakers don’t have to turn their back to the audience); both helped a lot feeling safe on stage.
Now, as a non-native English speaker who never spent more than 4 days in an English-speaking country, I obviously choked a bit once or twice, but overall I feel like my English was quite understandable. Plus, this is only a matter of practice, so it can only get better over time.
Among things I should pay attention to though:
Anyway, the event was really great, full of interesting talks and cool people. If there is another dotCSS next year, chances are high that you’ll see me there if I can attend it.
If you missed my talk (or anyone’s talk, actually), worry not, because everything will be online in a couple of weeks. Meanwhile, you can have a look at my slide deck; feel free to get in touch with any questions. Also, special thanks to Jesterhead, who designed the first slide for me.
But before we get too far, let me turn it over to Tim to catch us up on some basic knowledge regarding Bézier functions. Tim, please.
(Note: if you're only interested in the code, please head straight to CodePen.)
In computer graphics, creating curves used to be quite a complex task. In 1959, physicist and mathematician Paul de Casteljau, who worked at Citroën, developed an algorithm that helped create curves for car designs. Mathematician Pierre Bézier adopted the algorithm to design curvatures for Renault. In 1962, Pierre widely publicised what we now know as the Bézier curve.
The curve is used across many fields in computer graphics. Most digitally produced curves are made using this algorithm. Think of your car, phone or the font you’re reading. It was also adapted in less visible fields, like easing transitions.
The best-known implementations are vector paths in 2D and 3D graphics software. A path usually consists of many points connected by lines. Each line gets one or two “control points” that determine the curvature.
To create Bézier curves, a basic understanding of linear interpolation is required.
Linear interpolation (or lerping), is finding a point between two points (or along a line). It’s likely you’ve done this before, but in one dimension.
Imagine you have two numbers, 3 and 7. Suppose you needed to find the number exactly between them. The difference between 7 and 3 is 4. Adding half of 4 to 3 makes 5. So 5 is the correct answer. That’s exactly how linear interpolation works, but instead of dealing with numbers, we’re dealing with points in a 2D or 3D space, so you have to do this two or three times.
To lerp, we need two points and a number that indicates the progress along the line. This number is a decimal between 0 and 1 indicating how far along the line the result should be. Just multiply this number by the difference between the two points. The start is 0, the end is 1, and 0.5 is halfway along the line.
The first step is to get the difference between the two points:
(p1 - p0)
Then we need to multiply the difference with that third number I just explained:
(p1 - p0) * t
Finally, the minimal value is added:
p = (p1 - p0) * t + p0
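Expressed in code, the whole formula is a one-liner. A quick JavaScript sketch, reusing the 3-and-7 example from above:

```javascript
// Linear interpolation: the point at progress t between a and b.
const lerp = (a, b, t) => (b - a) * t + a;

lerp(3, 7, 0);   // → 3 (the start)
lerp(3, 7, 0.5); // → 5 (halfway)
lerp(3, 7, 1);   // → 7 (the end)
```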
To get a point along a curved path, we do this for every line along the path. If the path consists of more than one line, we get two or more interpolated points, and multiple points make new lines which can be lerped as well. This simply repeats until we have a single point left.
Let’s try this with three points:
p0 = (0, 0)
p1 = (0.4, 0.8)
p2 = (1, 1)
t = 0.4
// First iteration
i0 = lerp(p0, p1, t)
i1 = lerp(p1, p2, t)
// Second, final iteration
p = lerp(i0, i1, t)
A Bézier of three points is called a Quadratic Bézier.
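To make the arithmetic above easy to verify, here is the same two-iteration computation as a JavaScript sketch (`lerpPoint` simply lerps each axis, as described earlier):

```javascript
const lerp = (a, b, t) => (b - a) * t + a;
// A point is a pair [x, y]; lerp each axis separately.
const lerpPoint = (p, q, t) => [lerp(p[0], q[0], t), lerp(p[1], q[1], t)];

const p0 = [0, 0], p1 = [0.4, 0.8], p2 = [1, 1], t = 0.4;

// First iteration: reduce three points to two.
const i0 = lerpPoint(p0, p1, t); // ≈ [0.16, 0.32]
const i1 = lerpPoint(p1, p2, t); // ≈ [0.64, 0.88]

// Second, final iteration: one point on the quadratic curve.
const p = lerpPoint(i0, i1, t);  // ≈ [0.352, 0.544]
```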
We can do this for four points as well which requires a third iteration:
p0 = (0, 0)
p1 = (0.3, 0)
p2 = (0.6, 0.8)
p3 = (1, 1)
t = 0.6
// First iteration
i00 = lerp(p0, p1, t)
i01 = lerp(p1, p2, t)
i02 = lerp(p2, p3, t)
// Second iteration
i10 = lerp(i00, i01, t)
i11 = lerp(i01, i02, t)
// Third, final iteration
p = lerp(i10, i11, t)
A curve of four points is called a Cubic Bézier.
As stated before, the number of points is irrelevant, so we can use five points, or six, or seven!

As you add more points, more coordinates play a part in the final curve, making it increasingly difficult to add the desired detail. This is why the cubic Bézier curve, the one with four points, is the most common. If you’re familiar with software like Illustrator, you will know that between two points you get two control points, which makes four in total.
Note: if you haven’t already, I highly recommend watching this 4-minute video about the way Bézier curves are drawn by a computer. Fair warning: ah-ha moment triggerer.
Okay, at this point you should be in pretty good shape to understand cubic Bézier functions. But why is this related to Sass in any way? Well, CSS transitions and animations heavily rely on cubic Bézier functions. Both the `transition-timing-function` and `animation-timing-function` properties accept a cubic Bézier function described with `cubic-bezier()`.
Such a function can be represented on a two-axis plane, with the transition/animation progression along the Y axis and the time along the X axis. A curve is then drawn on the graph, representing the timing function.
This is what we wanted to achieve, with a very simple API, something like:
.grid {
@include cubic-bezier(0.32, 1, 0.53, 0.8);
}
Basically exactly like the `cubic-bezier()` function from CSS. We can also add an extra argument to pass a map of options if the defaults are not convenient enough (see below for an explanation of the available options):
.grid {
@include cubic-bezier(
0.32,
1,
0.53,
0.8,
(
'control-points': true,
'informations': true,
'size': 300px,
'color': #999,
'detail': 64,
)
);
}
Let's see how we did it.
Luckily, Sass provides basic mathematical operations: addition, subtraction, multiplication and division. Enough to create some basic curves. Because CSS lacks an API to draw lines, I chose to use `box-shadow` on a single element to mark points along the path, generated with Sass.
It all starts with linear interpolation. I already showed you how that works.
/// Linear interpolation
/// @author Tim Severien
/// @param {Number} $a
/// @param {Number} $b
/// @param {Number} $p
/// @return {Number} Return a number between `$a` and `$b`, based on `$p`
@function lerp($a, $b, $p) {
@return ($b - $a) * $p + $a;
}
However, Sass doesn’t do arithmetic operations on maps or lists. Linear interpolation only works with numbers, so an extra function is required to lerp on each axis, assuming a point is a list of two numbers:
/// Linear interpolation points
/// Arithmetic operators only work on numbers, so lerp the X and Y axes separately
/// @author Tim Severien
/// @param {Number} $a
/// @param {Number} $b
/// @param {Number} $p
/// @return {List}
@function lerp-point($a, $b, $p) {
@return lerp(nth($a, 1), nth($b, 1), $p), lerp(nth($a, 2), nth($b, 2), $p);
}
At this point, we have to apply the interpolation. Remember that the number of points for a curve is irrelevant, and that you can recursively calculate the interpolated points? This all looks very similar to the well-known `reduce()` function: just do [something] until there’s one left. In this case, that something is lerping.
/// Bezier Reduce
/// @author Tim Severien
/// @param {List} $points
/// @param {Number} $p
/// @return {Number}
@function bezier-reduce($points, $p) {
// Keep lerping until one point is left
@while length($points) > 1 {
// Temporary list containing the newly lerped points
$tmp: ();
// Iterate through all (current) points
@for $i from 1 to length($points) {
// Add lerped point to the temporary list
$tmp: append(
$tmp,
lerp-point(nth($points, $i), nth($points, $i + 1), $p)
);
}
// Replace old points by new interpolated list
$points: $tmp;
}
@return nth($points, 1);
}
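If Sass isn’t your thing, the exact same reduction translates almost line for line to JavaScript. This is a transliteration of the function above, not part of the original article:

```javascript
const lerp = (a, b, t) => (b - a) * t + a;
const lerpPoint = (p, q, t) => [lerp(p[0], q[0], t), lerp(p[1], q[1], t)];

// De Casteljau reduction: keep lerping adjacent points until one is left.
function bezierReduce(points, t) {
  while (points.length > 1) {
    const next = [];
    for (let i = 0; i < points.length - 1; i++) {
      next.push(lerpPoint(points[i], points[i + 1], t));
    }
    points = next;
  }
  return points[0];
}

// Works for any number of points: two is a plain lerp,
// three a quadratic Bézier, four a cubic, and so on.
bezierReduce([[0, 0], [0.3, 0], [0.6, 0.8], [1, 1]], 0.5);
```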
All that remains now is generating a sequence of points to display the graph and to generate the shadows:
/// Bezier shadow
/// @param {List} $points - List of points from Bezier
/// @param {Number} $detail - Number of particles
/// @output box-shadow
/// @author Tim Severien
@mixin bezier-shadow($points, $detail) {
// Create a list of shadows
$shadow: ();
@for $i from 0 to $detail {
// Get the point at $i / $detail
$point: bezier-reduce($points, $i / $detail);
// Create a new shadow for current point
$shadow: append($shadow, nth($point, 1) nth($point, 2), comma);
}
box-shadow: $shadow;
}
I won't dig too much into the code since it's mostly plain CSS at this point, but I'll still explain the logic behind our API, especially the `cubic-bezier` mixin, how it deals with configuration, and so on.
@mixin cubic-bezier($x1, $y1, $x2, $y2, $options: ()) {
$options: map-merge(
(
// Enable/disable control-points
'control-points': true,
// Extra informations
'informations': true,
// Size of the grid
'size': 300px,
// Color scheme
'color': #999,
// Points from the curve
'points': ($x1, $y1, $x2, $y2),
// Number of dots on the curve
'detail': 30
),
$options
);
@include draw-system($options);
}
As you can see, this mixin only deals with configuration. All it does is merge the given configuration, if any, with the default one. Then, it calls the `draw-system` mixin with the configuration as its only parameter.
@mixin draw-system($conf) {
width: map-get($conf, 'size');
height: map-get($conf, 'size');
position: relative;
color: map-get($conf, 'color');
border-left: 2px solid;
border-bottom: 2px solid;
border-top: 1px dashed;
border-right: 1px dashed;
@if map-get($conf, 'informations') {
&::after,
&::before {
position: absolute;
bottom: -1.75em;
text-transform: uppercase;
font-size: 0.75em;
}
@if map-has-key($conf, 'name') {
// Display name
&::before {
content: "#{map-get($conf, 'name')}";
left: 0;
}
}
// Display values
&::after {
content: "#{map-get($conf, 'points')}";
right: 0;
}
}
// Print the curve
> * {
@include draw-curve($conf);
}
}
If the `informations` key from the options map is truthy, it means we have to display the function's information under the graph. To do this, nothing beats pseudo-elements: `::before` for the name (if there is one), and `::after` for the function parameters (e.g. `0.42, 0, 0.58, 1`).
Then, it calls the `draw-curve` mixin.
@mixin draw-curve($conf) {
// Print the wrapper
@include draw-curve-wrapper($conf);
// Print the dots
@include draw-dots($conf);
// Print the control-points
@if map-get($conf, 'control-points') {
@include draw-control-points($conf);
}
}
We'll skip `draw-curve-wrapper` since it does nothing more than a couple of boring CSS lines. Moving on to `draw-dots`. This is where Tim's work and mine meet.
@mixin draw-dots($conf) {
$points: map-get($conf, 'points');
$size: map-get($conf, 'size');
&::after {
content: '';
@include circle(4px);
@include absolute($left: 0, $top: 0);
@include bezier-shadow(
(
0 $size,
(nth($points, 1) * $size) ((1 - nth($points, 2)) * $size),
(nth($points, 3) * $size) ((1 - nth($points, 4)) * $size),
$size 0
),
map-get($conf, 'detail')
);
}
}
Regarding `draw-control-points` now, it gets called only if the `control-points` key from the map is truthy. Control points are the blue and red dots, as well as the lines joining the dots to the corners of the graph.

The way they are drawn is kind of tricky, I must say (and quite complicated, so I won't display the code here). Basically, it consists of two pseudo-elements with their diagonal displayed thanks to a linear gradient and some geometry magic using the `atan` function (from Compass).
(Note: if you don't use Compass, you can use this (Ruby) implementation from Sassy-Math, or this (Sass) one from Ana Tudor.)
This experiment was fun, but really not very useful in practice. It can give you an idea of what a Bézier curve looks like and how it is manipulated, but it probably won't change your life if you write Sass for your day job.

If you need to create your own cubic-bezier timing function, this tool from Lea Verou will probably be more useful.
I hope you enjoyed this experiment. You can play with the code on CodePen:
See the Pen Cubic Bezier functions visualize by Kitty Giraudel (@KittyGiraudel) on CodePen.
[A]ny application that can be written in JavaScript, will eventually be written in JavaScript.
Not only is this quote famous by now, but it also turned out to be quite true. JavaScript grew from a weird little nerd into the cool kid we all know it is today. What Jeff didn’t know back then, perhaps, is how well his law applies to other things as well.
That’s why today, I hope he won’t mind if I expand his thought to declare the revisited Atwood’s law (calling it Atwood-Giraudel would be quite presumptuous):
[A]ny application that can be written in Sass, will eventually be written in Sass.
And given my obsession for Sass, I’ll go even further and add this extra part to the quote, even if it won’t ever be retained:
… and chances are high that it will be done by me.
Disclaimer: as with the original law from Jeff Atwood, it is obvious that Sass (or JavaScript) is not always the best choice: more often than not, things should be done in a different way, but the fact that we can usually makes us do it nevertheless.
Sass is 7 years old if I’m not mistaken, and has come a long way since its early days. In 7 years, and especially because of silly people like me who love doing crazy shit, a lot of stuff has been made in Sass already. Let’s see:
And there are countless more examples I’m probably not even aware of.
I think the main reason is that it’s challenging. Because Sass is a very limited language, doing advanced things can turn out to be quite difficult. And as we all know, a challenge is fun.
Aside from being fun to write, it actually helps a lot with understanding the language. I would not be that skilled with Sass if I had stopped after declaring a couple of variables and functions. You don’t get good by doing what everybody does. You get good by pushing the limits.
I cannot stress this enough: try things, folks. Do silly stuff. The only rule is to remember what is an experiment and what belongs in production code. Don’t use experimental/crazy code in a live codebase. It doesn’t smell good.
Any application that can be written in Sass, will eventually be written in Sass. And we are already close to the end.
As far as I am concerned, I am no accessibility expert, so I always find this kind of initiative very helpful. To briefly introduce a11y.css, it is a stylesheet that you can include in any web page to highlight possible mistakes, errors and improvements. Each notification comes with a message (displayed with pseudo-elements) explaining what’s going on and what should be done. Cool stuff, really.
I thought it was too bad to keep it exclusively in French, so I opened an issue to suggest a Sass solution (the project was already running on Sass anyway) to provide messages in different languages. I am very happy with what I have come up with, hence this article explaining how I did it.
The goal was not to switch the whole thing to English. I think Gaël wanted to keep French and in the meantime provide an English version. So the idea was to find a way to generate a stylesheet per language. Feel like adding Spanish? Go for it, should be a breeze.
My idea was to have a `.scss` file per language, following a pattern like `a11y-<language>.scss` for convenience, that gets compiled into an `a11y-<language>.css` file. This file shouldn’t contain much. Actually only:

- the `@charset` directive (obviously set to `UTF-8`);
- the language definition (`fr` or `en`);
- the imports (helpers and core styles).
would look like:
@charset "UTF-8";
@import 'utils/all';
@include set-locale('en');
@import 'a11y/a11y';
Looking pretty neat, right?
You’ve seen from the previous code snippet that we have a `set-locale` mixin accepting a language (shortcut) as a parameter. Let’s see how it works:
/// Defines the language used by `a11y.css`. For now, only `fr` and `en` allowed.
/// @group languages
/// @param {String} $language
/// @output Nothing
/// @example scss - Defines the language to `fr`.
/// @include set-locale('fr');
@mixin set-locale($language) {
$supported-languages: 'fr', 'en';
$language: to-lower-case($language);
@if not index($supported-languages, $language) {
@error "Language `#{$language}` is not supported. Pull request welcome!";
}
$language: $language !global;
}
There is very little done here. First, it makes sure the given language is supported. For now, only `fr` and `en` are. If it is not supported, it throws an error. Otherwise, it creates a global variable called `$language` containing the language (`fr` or `en`). Easy, let’s move on.
The point of this system is to gather all messages within a big Sass map. Thus, we don’t have dozens of strings scattered across stylesheets. Every single message, no matter the language, lives inside the `$messages` map. Then, we’ll have an accessor (a getter function) to retrieve a message from this map depending on the global language.

Gaël has divided messages into different themes: `errors`, `advices` and `warnings`. This is the first level of our map.
$messages: (
'errors': (),
'advices': (),
'warnings': ()
);
Then each theme gets mapped to a sub-map (second level) containing keys for different situations. For instance, the error telling that there is a missing `src` attribute on images:

[src] attribute missing or empty. Oh, well…

…is arbitrarily named `no-src`.
$messages: (
'errors': ('no-src': ()),
'advices': (),
'warnings': ()
);
And finally, this key is mapped to another sub-map (third level) where each key is the language and each value the translation:
$messages: (
'errors': ('no-src': ('fr': 'Attribut [src] manquant ou vide. Bon.', 'en': '[src] attribute missing or empty. Oh, well…')),
'advices': (),
'warnings': ()
);
However, fetching the `fr` key from the `no-src` key from the `errors` key of the `$messages` map would look like:

$message: map-get(map-get(map-get($messages, 'errors'), 'no-src'), 'fr');
This is both ugly and a pain in the ass to write. With a `map-deep-get` function, we could shorten this to:
$message: map-deep-get($messages, 'errors', 'no-src', 'fr');
Much better, isn’t it? Although having to type the language over and over is not very convenient. And we could also make sure `errors` is a valid theme (which is the case) and `no-src` is a valid key of the `errors` theme (which is also the case). To do all this, we need a little wrapper function. Let’s call it `message`, in all its simplicity:
/// Retrieve message from series of keys
/// @access private
/// @param {String} $theme - Either `advice`, `error` or `warning`
/// @param {String} $key - Key to find message for
/// @requires $messages
/// @return {String} Message
@function message($theme, $key) {
$locale: if(global-variable-exists('language'), $language, 'en');
@if not index(map-keys($messages), $theme) {
@error "Theme `#{$theme}` does not exist.";
}
@if not index(map-keys(map-get($messages, $theme)), $key) {
@error "No key `#{$key}` found for theme `#{$theme}`.";
}
@return map-deep-get($messages, $theme, $key, $locale);
}
The `message` function first deals with the language. If a global variable called `language` exists, which is the case if `set-locale` has been called, it uses it; otherwise, it falls back to `en`. Then, it makes sure the arguments are valid. Finally, it returns the result of `map-deep-get` as we’ve seen above.
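For the curious, `map-deep-get` itself is just a fold over the keys. Here is the same idea sketched in JavaScript, with a reduced stand-in for the `$messages` map (the Sass version loops over nested maps the same way):

```javascript
// Walk down a nested structure one key at a time;
// returns undefined as soon as a key is missing.
const mapDeepGet = (map, ...keys) =>
  keys.reduce((value, key) => (value == null ? undefined : value[key]), map);

const messages = {
  errors: {
    'no-src': {
      fr: 'Attribut [src] manquant ou vide. Bon.',
      en: '[src] attribute missing or empty. Oh, well…',
    },
  },
};

mapDeepGet(messages, 'errors', 'no-src', 'fr');
// → 'Attribut [src] manquant ou vide. Bon.'
```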
So we could use it like this:
img:not([src])::after {
content: message('errors', 'no-src');
}
Pretty cool! Although having to type `content` everywhere could be avoided. Plus, Gaël uses `!important` in order to make sure the messages are correctly displayed. Let’s have a `message` mixin wrapping around the `message` function!
/// Get a message from the translation map based on the defined language.
/// The message contains the icon associated to the message type.
/// @group languages
/// @param {String} $theme - Theme name
/// @param {String} $key - Key name
/// @require {function} message
/// @output `content`, with `!important`
/// @example scss - Get message for `no-src` from `errors` category when language is set to `en`
/// .selector {
/// @include message('errors', 'no-src');
/// }
/// @example css - Resulting CSS
/// .selector {
/// content: '[src] attribute missing or empty. Oh, well…';
/// }
@mixin message($theme, $key) {
content: message($theme, $key) !important;
}
Same arguments. No logic. Nothing but the `content` property with `!important`. Thus we would use it like this:
img:not([src])::after {
@include message('errors', 'no-src');
}
We’re done. It’s over!
Cases where we need a translation system in Sass are close to zero, but for a11y.css this work proves to be useful after all. Adding a new language, for instance German, is as easy as adding a `de` key to all messages in the `$messages` map, and adding `de` to `$supported-languages` within the `set-locale` mixin.
That’s it! Anyway, have a look at a11y.css, contribute to this awesome project and share the love!
It looks like this: `major.minor.patch` (e.g. `1.3.37`). The major version is for API changes and backward incompatibilities, the minor version is for backward-compatible features, and the patch version is for bug fixes.
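As a quick illustration of those three parts, here is a tiny, hypothetical bump helper in JavaScript (nothing to do with SassDoc’s actual tooling):

```javascript
// Bump a SemVer string: 'major' resets minor and patch,
// 'minor' resets patch, 'patch' only increments.
function bump(version, level) {
  let [major, minor, patch] = version.split('.').map(Number);
  if (level === 'major') { major += 1; minor = 0; patch = 0; }
  else if (level === 'minor') { minor += 1; patch = 0; }
  else { patch += 1; }
  return [major, minor, patch].join('.');
}

bump('1.3.37', 'patch'); // → '1.3.38'
bump('1.3.37', 'minor'); // → '1.4.0'
bump('1.3.37', 'major'); // → '2.0.0'
```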
Since npm uses Semantic Versioning for its packages, it is no surprise we use it at SassDoc. Meanwhile, we have seen quite a few surprises regarding our version bumps, so I thought I would clarify some things in a short article.
1.0.0?

We started working on SassDoc in mid-June and released the stable version `1.0.0` on July 17th according to npm, so we basically took a month for the launch.

When we were first talking about `1.0.0`, someone told us it was too soon because the project needed to mature a bit first.

While it makes sense in some way, I think releasing a stable version after a month isn’t too soon for such a small project as SassDoc, especially when 4 developers have been working on it.

The project matures as we work on it and as people start using it. There is no need to wait weeks or months before launching it: we need feedback. And you don’t get feedback when a project is on `0.4.3`.
Version `1.1.0` came on July 20th (3 days after `1.0.0`). Version `1.2.0` was released on August 11th (announced on Tuts+ the next day). Version `1.3.0` came one week later, on August 18th, and version `1.4.0` was launched 2 days later, on August 20th. Finally, version `1.5.0` (latest stable as of writing) came on August 25th.

So indeed, between August 10th and August 25th, we went from `1.1.0` to `1.5.0`. So what?
Here is how we plan minor versions: we have a list of features we’d like to work on. Small features are planned for the next minor version, while features that require a significant amount of work are delayed for 1 or 2 versions.
Version `1.2.0` took quite a while to build because we released a major feature: custom themes and templates. Not only did this require building a whole theming engine, but we also had to make sure the data structure we hand over to the theme was fixed and documented, so that people were able to build their own themes right away.
But for other minor versions, we just group a couple of features and bundle them together once they are ready. There is no need to wait a specific amount of time. I suppose we could release one version every two weeks as agile methodology dictates, but I’m not sure that would help us whatsoever.
In the end, we’ve seen some positive effects with this feature-rush. People seem enthusiastic about SassDoc and willing to get their hands on a project that is being improved on a daily basis.
2.0.0 in no time!

And so what? Is there some specific rule saying that v2 should happen like one year after v1? Here is the thing: we push as many things into v1 as possible, as long as they do not introduce backward-incompatible changes. When this is no longer possible, we’ll move on to the next major version.

For instance, if we ever come up with a way to allow both invisible comments and C-style comments, chances are high that we will break something. Thus, we push it back to `2.0.0`. It may be in `2.0.0` or `2.4.0`, we don’t know.
Along the same lines, we are considering providing a way to document BEM architecture (`@module`, `@element`…) but since this is likely to be one of the biggest features we’ve ever shipped, we’ll probably break something; probably something minor, but still. So this is delayed to ~`2.0.0`.
Meanwhile, as long as we’re able to add new features without breaking the API, we keep going. I can already tell there will be a `1.6.0`, which we are currently working on (bringing YAML configuration to the table), and while I don’t exclude a `1.7.0`, I think we will jump to `2.0.0` at that point.
Well, this is wrong for starters. Plus, when you release a minor version every 3 days, you are less likely to have bug reports. Anyway, when we find a bug in a stable version, we immediately push a patch (hence `1.1.1` to `1.1.6`, `1.3.1`, `1.3.2`, `1.4.1`), and we’ll keep doing so.
We’ve been working like crazy on SassDoc lately, not only because this Node project is very fun to work on, but because we’ve realized our 4-people crew works quite well together. Each of us has special skills that complement the others’.
Plus, we have noticed people were really interested in having a powerful tool to document their Sass projects. We only hope SassDoc will soon be the go-to tool for this.
By the way, we need feedback. And opinions. Consider joining us on the repository to chat about open issues!
“Why do we have to learn algebra, Miss? We’re never going to use it…”
—Everyone in my maths class bit.ly/UaM2wf
As far as I can see, Harry uses a carousel to display quotes about his work on his home page. Why use JavaScript when we can use CSS, right? So he uses a CSS animation to run the carousel. That sounds like a lovely idea, until you have to compute keyframes…
Below is Harry’s comment in his carousel component:
Scroll the carousel (all hard-coded; yuk!) and apply a subtle blur to imply motion/speed. Equation for the carousel’s transitioning and delayed points in order to complete an entire animation (i.e. 100%):

nx + (n − 1)y = 100

where n is the number of slides, x is the percentage of the animation spent static, and y is the percentage of the animation spent animating.
This carousel has five panes, so:

5x + 4y = 100

To work out y if we know n and decide on a value for x:

y = (100 − nx) / (n − 1)

If we choose that x equals 17.5 (i.e. a frame spends 17.5% of the animation’s total time not animating), and we know that n equals 5, then y = 3.125:

y = (100 − 5 × 17.5) / 4 = 3.125
Static for 17.5%, transition for 3.125%, and so on, until we hit 100%.
If we were to choose that x equals 15, then we would find that y equals 6.25:

y = (100 − 5 × 15) / 4 = 6.25
If y comes out as zero-or-below, it means the number we chose for x was too large: pick again.
N.B. We also include a halfway point in the middle of our transitioning frames, to which we apply a subtle blur. This number is derived from:

ax + (a − 1)y + y/2

where a is the frame in question (out of n frames). The halfway point between frames 3 and 4 is:

3 × 17.5 + 2 × 3.125 + 3.125/2 = 60.3125
I’m pretty sure this is all a mess. To any kind person reading this who would be able to improve it, I would be very grateful if you would advise :)
And the result is:
@keyframes carousel {
0% {
transform: translate3d(0, 0, 0);
filter: blur(0);
}
17.5% {
transform: translate3d(0, 0, 0);
filter: blur(0);
}
19.0625% {
filter: blur(2px);
}
20.625% {
transform: translate3d(-20%, 0, 0);
filter: blur(0);
}
38.125% {
transform: translate3d(-20%, 0, 0);
filter: blur(0);
}
39.6875% {
filter: blur(2px);
}
41.25% {
transform: translate3d(-40%, 0, 0);
filter: blur(0);
}
58.75% {
transform: translate3d(-40%, 0, 0);
filter: blur(0);
}
60.3125% {
filter: blur(2px);
}
61.875% {
transform: translate3d(-60%, 0, 0);
filter: blur(0);
}
79.375% {
transform: translate3d(-60%, 0, 0);
filter: blur(0);
}
80.9375% {
filter: blur(2px);
}
82.5% {
transform: translate3d(-80%, 0, 0);
filter: blur(0);
}
100% {
transform: translate3d(-80%, 0, 0);
filter: blur(0);
}
}
Holy moly!
Before even thinking about Sass, let’s lighten the animation a little bit. As we can see from the previous code block, some keyframes are identical. Let’s combine them to make the whole animation simpler:
@keyframes carousel {
0%,
17.5% {
transform: translate3d(0, 0, 0);
filter: blur(0);
}
19.0625% {
filter: blur(2px);
}
20.625%,
38.125% {
transform: translate3d(-20%, 0, 0);
filter: blur(0);
}
39.6875% {
filter: blur(2px);
}
41.25%,
58.75% {
transform: translate3d(-40%, 0, 0);
filter: blur(0);
}
60.3125% {
filter: blur(2px);
}
61.875%,
79.375% {
transform: translate3d(-60%, 0, 0);
filter: blur(0);
}
80.9375% {
filter: blur(2px);
}
82.5%,
100% {
transform: translate3d(-80%, 0, 0);
filter: blur(0);
}
}
Fine! That’s less code to output.
Keyframes are typically the kind of thing you can optimize. Because they are heavily bound to numbers and loop iterations, it is usually quite easy to generate a repetitive `@keyframes` animation with a loop. Let’s try that, shall we?
First, bring in the basics. For the sake of consistency, I kept Harry’s variable names: `$n`, `$x` and `$y`. Let’s not forget their meaning:

- `$n` is the number of frames in the animation;
- `$x` is the percentage of the animation spent static for each frame. Logic wants it to be less than `100% / $n` then;
- `$y` is the percentage of the animation spent animating for each frame.

$n: 5;
$x: 17.5%;
$y: (100% - $n * $x) / ($n - 1);
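Before generating any keyframes, the `$y` formula is easy to sanity-check in plain JavaScript, against the numbers from Harry’s comment (a quick check, not part of the article’s code):

```javascript
// y = (100 - n * x) / (n - 1): percentage of the animation spent
// animating per transition, given n frames and x% static time each.
const animating = (n, x) => (100 - n * x) / (n - 1);

animating(5, 17.5); // → 3.125
animating(5, 15);   // → 6.25
animating(5, 25);   // → -6.25: x was too large, pick again
```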
Now, we need to open the `@keyframes` directive, then a loop.
@keyframes carousel {
@for $i from 0 to $n {
// 0, 1, 2, 3, 4
// Sass Magic
}
}
Inside the loop, we will use Harry’s formulas to compute each pair of identical keyframes (for instance, 41.25% and 58.75%):
$current-frame: ($i * $x) + ($i * $y);
$next-frame: (($i + 1) * $x) + ($i * $y);
Note: the parentheses are completely optional here, we just use them to keep things clean.
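To make sure these two expressions match Harry’s hand-computed keyframes, here is the same arithmetic in JavaScript (note the `$i * $y` term in the next frame), checked against the percentages from the original animation:

```javascript
const n = 5, x = 17.5, y = (100 - n * x) / (n - 1); // 3.125

// The static span of frame i runs from currentFrame(i) to nextFrame(i).
const currentFrame = (i) => i * x + i * y;
const nextFrame = (i) => (i + 1) * x + i * y;

currentFrame(0); // → 0      (matches the 0% keyframe)
nextFrame(0);    // → 17.5   (matches the 17.5% keyframe)
currentFrame(1); // → 20.625 (matches the 20.625% keyframe)
nextFrame(1);    // → 38.125 (matches the 38.125% keyframe)
```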
And now, we use those variables to generate a keyframe inside the loop. Let’s not forget to interpolate them so they are correctly output in the resulting CSS (more information about Sass interpolation on Tuts+).
#{$current-frame, $next-frame} {
transform: translate3d($i * -100% / $n, 0, 0);
filter: blur(0);
}
Quite simple, isn’t it? For the first loop run, this would output:
0%,
17.5% {
transform: translate3d(0%, 0, 0);
filter: blur(0);
}
All we have left is outputting what Harry calls a halfway frame, to add a little blur effect. Then again, we’ll use his formula to compute the keyframe selectors:
$halfway-frame: $i * ($x / 1%) + ($i - 1) * $y + ($y / 2);
#{$halfway-frame} {
filter: blur(2px);
}
Oh-ho! We got an error here!
Invalid CSS after "": expected keyframes selector (e.g. 10%), was "-1.5625%"
As you can see, we end up with a negative keyframe selector. This is prohibited by the CSS specifications, and Sass considers it a syntax error, so we need to make sure it does not happen. Actually, it only happens when `$i` is `0`, so basically on the first loop run. An easy way to prevent this error is to condition the output of this rule on the value of `$i`:
@if $i > 0 {
#{$halfway-frame} {
filter: blur(2px);
}
}
Error gone, all good! So here is how our code looks so far:
$n: 5;
$x: 17.5%;
$y: (100% - $n * $x) / ($n - 1);
@keyframes carousel {
@for $i from 0 to $n {
$current-frame: ($i * $x) + ($i * $y);
$next-frame: (($i + 1) * $x) + ($i * $y);
#{$current-frame, $next-frame} {
transform: translate3d($i * -100% / $n, 0, 0);
filter: blur(0);
}
}
$halfway-frame: $i * ($x / 1%) + ($i - 1) * $y + ($y / 2);
@if $i > 0 {
#{$halfway-frame} {
filter: blur(2px);
}
}
}
}
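As a sanity check for the halfway frames too, the same formula transposed to JavaScript (illustrative only) shows both the expected values and the negative result at i = 0 that the @if guard filters out:

```javascript
const n = 5;
const x = 17.5;
const y = (100 - n * x) / (n - 1); // 3.125

const halfwayFrames = [];
for (let i = 0; i < n; i++) {
  // Middle of the animating phase preceding frame i
  const halfway = i * x + (i - 1) * y + y / 2;
  if (i > 0) halfwayFrames.push(halfway); // i = 0 yields -1.5625, hence the guard
}

console.log(halfwayFrames); // [19.0625, 39.6875, 60.3125, 80.9375]
```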
So far so good? It works pretty well in automating Harry’s code so he does not have to compute everything from scratch again if he ever wants to display, say, 4 slides instead of 5, or wants the animation to be quicker or longer.
But we are basically polluting the global scope with our variables. Also, if he needs another carousel animation elsewhere, we will need to find other variable names, and copy the whole content of the animation into the new one. That’s definitely not ideal.
So we have variables and possible duplicated content: perfect case for a mixin! In order to make things easier to understand, we will replace those one-letter variable names with actual words if you don’t mind:
$n becomes $frames, $x becomes $static, and $y becomes $animating.
Also, because a mixin can be called several times with different arguments, we should make sure it outputs different animations. For this, we need to add a third parameter: the animation name.
@mixin carousel-animation($frames, $static, $name: 'carousel') {
$animating: (100% - $frames * $static) / ($frames - 1);
// Moar Sass
}
Since it is now a mixin, it can be called from several places: probably the root level, but there is nothing preventing us from including it from within a selector. Because @keyframes directives need to stand at the root level in CSS, we’ll use @at-root from Sass to make sure the animation gets output at the root level.
@mixin carousel-animation($frames, $static, $name: 'carousel') {
$animating: (100% - $frames * $static) / ($frames - 1);
@at-root {
@keyframes #{$name} {
// Animation logic here
}
}
}
The rest is pretty much the same. Calling it is quite easy now:
@include carousel-animation(
$frames: 5,
$static: 17.5%
);
Resulting in:
@keyframes carousel {
0%,
17.5% {
transform: translateX(0%);
filter: blur(0);
}
19.0625% {
filter: blur(2px);
}
20.625%,
38.125% {
transform: translateX(-20%);
filter: blur(0);
}
39.6875% {
filter: blur(2px);
}
41.25%,
58.75% {
transform: translateX(-40%);
filter: blur(0);
}
60.3125% {
filter: blur(2px);
}
61.875%,
79.375% {
transform: translateX(-60%);
filter: blur(0);
}
80.9375% {
filter: blur(2px);
}
82.5%,
100% {
transform: translateX(-80%);
filter: blur(0);
}
}
Mission accomplished! And if we want another animation for the contact page for instance:
@include carousel-animation(
$name: 'carousel-contact',
$frames: 3,
$static: 20%
);
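Applying the same formulas to this call (3 frames, 20% static, hence 20% animating between frames) gives the keyframe positions below, computed here in JavaScript for illustration since this output is not shown in the post:

```javascript
const frames = 3;
const staticPct = 20; // $static
const animating = (100 - frames * staticPct) / (frames - 1); // 20

const keyframes = [];
for (let i = 0; i < frames; i++) {
  if (i > 0) {
    // Blur keyframe halfway through the preceding animating phase
    keyframes.push({ blur: i * staticPct + (i - 1) * animating + animating / 2 });
  }
  keyframes.push({
    from: i * (staticPct + animating),           // start of the static phase
    to: (i + 1) * staticPct + i * animating,     // end of the static phase
  });
}

console.log(keyframes);
// [ { from: 0, to: 20 }, { blur: 30 }, { from: 40, to: 60 },
//   { blur: 70 }, { from: 80, to: 100 } ]
```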
Pretty neat, heh?
That’s pretty much it. While Harry’s initial code is easier to read for the human eye, it’s really not ideal when it comes to maintenance. That’s where Sass comes in handy, automating the whole thing with calculations and loops. It does make the code a little more complex, but it also makes it easier to maintain and update for future use cases.
You can play with the code on SassMeister:
I am glad to have Ezekiel Gabrielse today, dropping some Sass knowledge on how to build a powerful Sass API to customize the feel and look of elements. Fasten your belts peeps, this is quite intense!
Hey people! I am the creator of a relatively new Sass grid-system called Flint, and a lightweight Compass extension called SassyExport, which we will be discussing throughout this series.
Since I already mentioned the word series, this article will be the first post of a two-part series. Today we’re going to create a Sass-powered customization API that can be plugged into a frontend API, such as a WordPress theming framework, or even allow live customization through JS.
Today’s discussion will focus on the Sass part, but it will flow straight into part 2 of this series, where we will be utilizing a brand new tool I developed called SassyExport, which allows you to export JSON from Sass and write it into a new file to use elsewhere in your projects.
Our Sass-powered customization API will essentially be able to mark elements within our stylesheet that we want to customize, and which of those elements properties may be customized as well as default values for these properties.
To be able to track all this stuff, we are going to use Sass maps to sort the output of this API by selector. Within that selector’s map, we’ll not only list its customizable properties but also the defaults for its values in case the user has not modified those.
We are going to do this all within Sass, and as we will discuss in part 2 of the series, a language like PHP or JS can hook in to our Sass-API and use the data to modify our stylesheet for these specific $selector->$property
relationships. For the sake of time, we’re going to keep this project simple and only stick to color customization.
Therefore, we will create a color palette as a map, in order to pull values from it. That way we can also hook into this palette module through our frontend API and then allow the user to modify the original color palette.
Furthermore, because we’ll be keeping track of which selectors (or if we’re getting really technical — which sub-modules) are using which color, we can then update their values if the user ever modifies that sub-module’s color value.
We need to create a global variable for our color palette.
We need to keep another global variable recording each customizable element: its selector (&), the module it uses, its customizable properties, and their default values.
We also need to output these default values into our stylesheet; that way, our mixin will have two purposes: serve as our customization API, and as a way to retrieve our color palette to use within the actual stylesheet.
Throughout this article I will be using another project of mine called Flint as a base. It has various helper functions that we will be using, such as selector-string(), a Ruby function returning a stringified version of the current selector (&) so that we can use it in interpolation (which currently isn’t possible), as well as a few other self-explanatory functions such as exists(), is-map(), is-list() and map-fetch().
This is the end result of what we will be building today. Take a look at the code, and follow along as we go through creating this API and understanding the methodology behind it, if that’s your thing.
Play with this gist on SassMeister.
Firstly, let’s create the map for our color palette setup.
We are going to keep our colors in a sub-map called "palette" so we can keep our main API’s code more modular to allow it to work with other customizable properties than just colors.
// Customization module defaults
$customizer: (
  'palette': (
    'primary': (
      'lightest': #eff3d1,
      'light': #bbdfbc,
      'base': #8bb58e,
      'dark': #0b3c42,
      'darkest': #092226
    ),
    'complementary': (
      'light': #f6616e,
      'base': #f2192c,
      'dark': #b40a19
    ),
    'gray': (
      'light': #819699,
      'base': #4b5557,
      'dark': #333a3b
    ),
    'black': #131517,
    'white': #f2f9ff
  )
) !global;
// Global variables
$customizer-instances: () !global;
As you can see, we have a pretty simple map of our default color palette to use within our customization API. I also created another global variable called $customizer-instances
that will keep a record of all the data from each use of the API. It’s an empty map for now.
So, let’s go ahead and move on to the next step, which is fleshing out the bones of our main mixin that we will be using to drive the API.
Before we go any further, let’s decide on how we want our API to work. To be able to jump right into the code in the rest of this article, this is what our syntax is going to look like at the end:
.selector {
@include customizer(
$args: (
color: 'white',
background: 'primary' 'darkest',
border-color: 'complementary' 'base'
),
$uses: 'palette'
);
}
In order to make the API easy to use and as close to the usual CSS syntax as possible, we’re going to require the first argument to be a map called $args
so that we can use $key->$value
pairs for each customizable property, as well as allowing multiple properties to be passed to a single instance of the mixin.
Note: If you’re unfamiliar with using maps as arguments, Kitty wrote up a pretty nifty article on that, as well as many other use-cases for maps.
The next argument will be fetching a module from within the above $customizer
map, which in this case will be our "palette" module. We’ll call this argument $uses
, as we will be fetching (using) values from it for use in our first argument, $args
.
I also want to make it fall back to outputting plain CSS if no module is specified: rather than erroring out, we can simply @warn the user that the mixin shouldn’t be used that way. Therefore, our API will be less frustrating to newer users who don’t happen to be using it correctly.
// Create new customizable properties, save to instance map
//
// @param {Map} $args - map of customizable property->value pairs
// @param {String | Null} $uses (null) - module to pull property values from
//
// @output $property->$value pairs for each argument
@mixin customizer($args, $uses: null) {
// Make sure argument is a map
@if is-map($args) {
// Use module? Expecting module to exist
@if $uses != null {
// Check if module exists
@if exists($customizer, $uses) {
// … All is safe, let’s work with the arguments
}
// Module did not exist, throw error
@else {
@warn "Invalid argument: #{$uses}. Module was not found.";
}
}
// No module specified, expecting plain CSS
@else {
// … Since we’ll be expecting valid CSS, let’s output it here
// Warn that customization mixin shouldn’t be used without a module
@warn "The customization mixin should not be used without specifying a module to use.";
}
}
// Argument was not a map, throw error
@else {
@warn "Invalid argument: #{$args}. Argument type is not a map.";
}
}
I’ve commented the above code, but let’s go ahead and dig a little deeper into the structure of the mixin. Like I said above, the first thing we should do is check that the $args
argument is a map, and depending on the result, we’ll either throw an error, or move on.
Next, let’s check if a module was passed as the $uses argument; if not, let’s output any $key->$value pairs as plain CSS. We will also throw a warning to let the user know that the mixin shouldn’t be used for plain CSS output.
On the other hand, if $uses
is not null
, let’s move on to check whether or not the module actually exists within our $customizer
variable (the palette map), like before we will either error out with a warning, or move forward.
Now, since we want to be able to pass multiple customizable properties into a single instance of the mixin, we need to iterate over each of those arguments. So, from within our conditional statement that checks whether or not the module exists, let’s add the following code:
// @if exists($customizer, $uses) {
// Run through each argument individually
@each $arg in $args {
// Break up argument into property->value
$property: nth($arg, 1);
$value: nth($arg, 2);
// Get values from module
@if is-list($value) or exists($customizer, $value) {
$value: null; // … we need to fetch the values from our module here
}
// Output styles
#{$property}: $value;
}
// } @else module did not exist
In order to loop through each argument, we use an @each
loop. Within the loop, we retrieve both the $property
and the $value
using the nth()
function. Then, we check if $value
is either a list (when we’re fetching the value from a deeper sub-module such as "primary"), or that the module exists (for values that don’t have additional sub-modules, but rather a single value such as "white"). Assuming this check returns true
, we need a way to fetch these values from their deeper sub-modules; so let’s create a function for that called use-module()
.
The function is going to take two arguments, fairly similar to the arguments our main mixin takes. The first argument is a list of $args
, which we will use to fetch the value from the module we passed into $uses
in the main mixin.
Which brings us to the second argument! Since the function needs to know which module it’s fetching from, let’s create an argument called $module
.
// Return value for property based on passed module
//
// @param {List} $args - list of keys for customizable property
// @param {String} $module - module to pull property values from
//
// @return {*} - $value from $module
@function use-module($args, $module) {
$exists: true;
// Append the list of arguments to the module to pass to map-fetch
$module: join($module, $args);
// Make sure all sub-modules exist
@if length($args) > 1 {
@each $arg in $args {
@if not exists($customizer, $arg) {
$exists: false;
}
}
}
@if $exists {
// Grab value from module by passing in newly built list
@return map-fetch($customizer, $module);
} @else {
// One or more of the modules were not found, throw error
@warn "Invalid arguments: #{$module}. One or more module or sub-module not found.";
@return false;
}
}
You can see that I’m doing a few simple checks to make sure every module and sub-module exists within the $customizer map. If the argument was only a single value, then our check from the main mixin (before we even enter the function) will do just fine, but if we’re fetching from additional sub-modules, we need to make sure those exist so that we don’t get any error that would crash the compilation.
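If Sass maps feel abstract, here is a rough JavaScript analogy of what use-module() does, with plain objects standing in for Sass maps. The names and the trimmed-down palette are mine, not the actual Flint helpers:

```javascript
// Hypothetical, trimmed-down stand-in for the $customizer map
const customizer = {
  palette: {
    primary: { lightest: '#eff3d1', darkest: '#092226' },
    white: '#f2f9ff',
  },
};

// Walk the module path, warning instead of crashing when a key is missing
function useModule(args, module) {
  const path = [module].concat(args); // e.g. ['palette', 'primary', 'darkest']
  let value = customizer;
  for (const key of path) {
    if (value === null || typeof value !== 'object' || !(key in value)) {
      console.warn('Invalid arguments: ' + path + '. One or more module or sub-module not found.');
      return false;
    }
    value = value[key];
  }
  return value;
}

console.log(useModule(['primary', 'darkest'], 'palette')); // '#092226'
console.log(useModule(['white'], 'palette'));              // '#f2f9ff'
```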
So, our code is fully functional right now, but we haven’t kept a record of any of the data we passed, or which selectors and which of their properties are customizable. So, let’s go ahead and create the function needed to do that.
Remember we initialized an empty global map called $customizer-instances
? As I said, we are going to use that variable to house each instance of the mixin and keep track of the selector, which modules it uses, all of its customizable properties as well as their default values.
The function will be called new-customizer-instance(). It will take two arguments identical to the ones the main customizer() mixin takes, and for good reason: we’re essentially going to loop over the arguments the exact same way, but instead of outputting styles for the selector, we’re going to save these variables to an $instance map with the selector’s name as the top-most key.
// Create new customizable instance
//
// @param {Map} $args - map of customizable property->value pairs
// @param {String} $module - module to pull property values from
//
// @return {Map} updated instance map
@function new-customizer-instance($args, $module) {
// Define static selector
$selector: selector-string(); // Flint Ruby function
// Empty argument map
$instance-properties: ();
// Run through each argument individually
@each $property, $value in $args {
// Merge into argument map
$instance-properties: map-merge(
$instance-properties,
('#{$property}': ('module': $module, 'value': $value))
);
}
// Create new instance map for selector, save properties
$customizer-instance: ('#{$selector}': $instance-properties);
// Merge into main map
@return map-merge($customizer-instances, $customizer-instance);
}
As you can see, we’re using the Ruby function I talked about earlier called selector-string(), which outputs a stringified version of the & operator in Sass. That way we can work with the selector the same way we would with any other string, which currently isn’t possible when using the normal & operator. You can read more about that issue here.
Next, we’re going to create an empty map that is going to contain each customizable $property
and all of the data for it such as its $module
and the $value
that is used from the module.
Unlike the main mixin, we’re not going to keep track of what styles are actually output, but rather where those styles came from within our module ("palette"). That way, if, say, the "primary" "base" color changes via our frontend API, we know that this element is using that value, so we can then update the stylesheet to reflect the change.
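To make the shape of that bookkeeping concrete, here is the same logic as a JavaScript sketch, with plain objects standing in for Sass maps (an analogy, not the actual implementation):

```javascript
let customizerInstances = {};

// Record which module and which keys each property of a selector uses
function newCustomizerInstance(selector, args, module) {
  const instanceProperties = {};
  for (const property of Object.keys(args)) {
    instanceProperties[property] = { module: module, value: args[property] };
  }
  // Merge the new selector entry into the global instance map
  return Object.assign({}, customizerInstances, { [selector]: instanceProperties });
}

customizerInstances = newCustomizerInstance('.selector', {
  color: 'white',
  background: ['primary', 'darkest'],
}, 'palette');

console.log(customizerInstances['.selector'].background);
// { module: 'palette', value: [ 'primary', 'darkest' ] }
```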
But, as we can tell from the function above, it returns a merged map; we haven’t actually told the new map to override the global $customizer-instances variable. Instead of making the function do that, let’s create a mixin to handle that part so we can simply include it into the main mixin where we need to. That way, if we ever need to make minor adjustments, we only have to update them in one area. This next mixin is going to be rather simple.
// Create new customizable instance
//
// @param {Map} $args - map of customizable property->value pairs
// @param {String} $module - module to pull property values from
//
// @return {Map} - updated instance map
@mixin new-customizer-instance($args, $module) {
$customizer-instances: new-customizer-instance($args, $module) !global;
}
All that this mixin is doing, is taking the updated instance map from the new-customizer-instance()
function, and setting the global $customizer-instances
variable to reflect that update.
Going back to our main customizer()
mixin, let’s update the code to include all of our new functions.
// Create new customizable properties, save to instance map
//
// @param {Map} $args - map of customizable property->value pairs
// @param {String | Null} $uses (null) - module to pull property values from
//
// @output $property->$value pairs for each argument
@mixin customizer($args, $uses: null) {
// Argument is not a map, throw error
@if not is-map($args) {
@warn "Invalid argument: #{$args}. Argument type is not a map.";
} @else {
// Use module? Expecting module to exist
@if $uses != null {
// Module doesn’t exist, throw error
@if not exists($customizer, $uses) {
@warn "Invalid argument: #{$uses}. Module was not found.";
} @else {
// Save arguments to instance map
@include new-customizer-instance($args, $uses);
// Run through each argument individually
@each $property, $value in $args {
// Check if sub-module exists
@if is-list($value) or exists($customizer, $value) {
// Get values from sub-module
$value: use-module($value, $uses);
}
// Sub-module did not exist, throw error
@else {
@warn "Invalid argument: #{$value}. Sub-module was not found.";
}
// Output styles
#{$property}: $value;
}
}
}
// No module specified, expecting plain CSS
@else {
// Loop through each argument individually and output
@each $property, $value in $args {
#{$property}: $value;
}
// Warn that customization mixin shouldn’t be used without a module
@warn "The customization mixin should not be used without specifying a module to use.";
}
}
}
Above, I simply added in our new functions, and if all went well, our code should be fully functional.
.selector {
@include customizer($args: (
color: 'white',
background: 'primary' 'darkest',
border-color: 'complementary' 'base',
), $uses: 'palette');
}
Every time the customizer() mixin is run, a new instance is created with all of the needed data.
// Updates the global instance map with the new selector
$customizer-instances: (
  ".selector": (
    "color": (
      "module": "palette",
      "value": "white",
    ),
    "background": (
      "module": "palette",
      "value": ("primary", "darkest"),
    ),
    "border-color": (
      "module": "palette",
      "value": ("complementary", "base"),
    ),
  ),
);
Then the new styles are fetched and output into the stylesheet.
// And outputs the selectors styles from our module,
.selector {
color: #f2f9ff;
background: #092226;
border-color: #f2192c;
}
Now that we have these variables ($customizer
and $customizer-instances
) containing a wealth of useful data, in part 2 we’ll walk through the basic syntax for SassyExport and how we’re going to use it to export all of this data into JSON. We will also discuss the various ways for this data to give opportunity to create some pretty impressive features when coupled with other languages, such as PHP.
Until next time, you can play with the customization API on SassMeister, check out SassyExport on Github, or download the gem to use with Compass in your own project.
Kitty: Do you know how bitwise operators work?
Val: Yes.
Kitty: Do you think we could implement them in Sass?
Val: No.
(Loading…)
Val: Well, in fact we could.
Kitty: LET’S DO IT!
And so we did, hence a short article to relate the story as well as providing a (useless) use case. But first let’s catch up on bitwise operators, shall we?
Note: project is on GitHub. Check out SassyBitwise.
Note: I am no programmer so please kindly forgive any shortcuts I may take when explaining bitwise operators.
You probably know that the numbers we use in everyday life are expressed in base 10, also known as decimal. Hexadecimal is base 16. Octal is base 8. And binary is base 2. Just to name a few popular bases.
Let’s put this very simply: bitwise operators are operators for numbers expressed in their binary form. The most common bitwise operators are AND (&), OR (|) and NOT (~), but there are also XOR (^), LEFT-SHIFT (<<) and RIGHT-SHIFT (>>).
To illustrate this explanation, allow me to have a little example (inspired from Wikipedia):
# ~7
NOT 0111 (decimal 7)
= 1000 (decimal 8)
# 5 & 3
0101 (decimal 5)
AND 0011 (decimal 3)
= 0001 (decimal 1)
# 5 | 3
0101 (decimal 5)
OR 0011 (decimal 3)
= 0111 (decimal 7)
# 2 ^ 10
0010 (decimal 2)
XOR 1010 (decimal 10)
= 1000 (decimal 8)
# 23 << 1
00010111 (decimal 23) LEFT-SHIFT 1
= 00101110 (decimal 46)
# 23 >> 1
00010111 (decimal 23) RIGHT-SHIFT 1
= 00001011 (decimal 11)
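These examples are easy to verify with JavaScript’s native bitwise operators (NOT is masked to 4 bits here, since ~7 on a full-width signed integer yields -8 rather than 8):

```javascript
console.log((~7) & 0b1111); // 8  (NOT, on 4 bits)
console.log(5 & 3);         // 1  (AND)
console.log(5 | 3);         // 7  (OR)
console.log(2 ^ 10);        // 8  (XOR)
console.log(23 << 1);       // 46 (LEFT-SHIFT)
console.log(23 >> 1);       // 11 (RIGHT-SHIFT)
```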
As you can see, the idea is pretty straightforward:

- NOT turns 1s into 0s, and 0s into 1s
- AND yields 1 if both bits are 1, else 0
- OR yields 1 if either bit is 1, else 0
- XOR yields 1 if exactly one of the two bits is 1, else 0
- LEFT-SHIFT shifts all bits n positions to the left
- RIGHT-SHIFT shifts all bits n positions to the right

If you’re more of a table kind of person:
| Bit | NOT |
|-----|-----|
| 1   | 0   |
| 0   | 1   |

| Bit 1 | Bit 2 | AND | OR | XOR |
|-------|-------|-----|----|-----|
| 1     | 0     | 0   | 1  | 1   |
| 0     | 1     | 0   | 1  | 1   |
| 0     | 0     | 0   | 0  | 0   |
| 1     | 1     | 1   | 1  | 0   |

|             | Bit 1 | Bit 2 | Bit 3 | Bit 4 | Bit 5 | Bit 6 | Bit 7 | Bit 8 |
|-------------|-------|-------|-------|-------|-------|-------|-------|-------|
| Binary      | 0     | 0     | 0     | 1     | 0     | 1     | 1     | 1     |
| LEFT-SHIFT  | 0     | 0     | 1     | 0     | 1     | 1     | 1     | 0     |
| RIGHT-SHIFT | 0     | 0     | 0     | 0     | 1     | 0     | 1     | 1     |
So you got bitwise.
Now, we wanted to implement this in Sass. There are two ways of doing it: manipulating binary strings, or relying on the mathematical equivalents of each operator.
We could have decided to manipulate binary strings but, god knows why, we ended up implementing the mathematical equivalents of all operators. Fortunately, we didn’t have to figure out the formulas ourselves (we are not that clever): Wikipedia has them.
You may think that we didn’t need a decimal to binary converter since we use math rather than string manipulation. Actually, we had to write a decimal-to-binary()
function because we needed to know the length of the binary string to compute bitwise operations.
We could have figured this length without converting to binary if we had a log()
function. And we could have made a log()
function if we had a frexp()
function. And we could have made a frexp()
function if we had bitwise operators. Do you see the problem here?
Valérian summed it up quite nicely in a Tweet:
&, | and ^ bitwise operators math formulas needs log(), but log() needs frexp() which needs bitwise operators. Fak! cc @KittyGiraudel
— Valérian Galliat (@valeriangalliat) June 4, 2014
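The arithmetic equivalents boil down to peeling bits off with modulo and division. As an illustration (in JavaScript, not the actual Sass code), AND can be computed without any native bitwise support roughly like this:

```javascript
// Bitwise AND using only arithmetic: compare the lowest bits
// (x % 2, y % 2), then shift right by dividing by 2
function and(x, y) {
  let result = 0;
  let power = 1;
  while (x > 0 && y > 0) {
    if (x % 2 === 1 && y % 2 === 1) result += power;
    x = Math.floor(x / 2);
    y = Math.floor(y / 2);
    power *= 2;
  }
  return result;
}

console.log(and(5, 3));   // 1
console.log(and(23, 12)); // 4
```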
I won’t dig into the Sass code because there wouldn’t be much point. Let’s just have a look at the final implementation. We have implemented each operator as a Sass function called bw-* where * stands for the name of the operator (e.g. and). Except for bw-not(), which is a rather particular operator, all functions accept two arguments, both decimal numbers.
On top of that, we have built a bitwise()
function (aliased as bw()
) which provides a more friendly API when dealing with bitwise operations. It accepts any number of queued bitwise operations, where operators are quoted. For instance:
// 42 | 38 | 24
$value: bitwise(42 '|' 38 '|' 24);
So that’s not too bad. The fact that operators have to be quoted for Sass not to crash is kind of annoying, but I suppose we can live with it. Other than that, it’s pretty much like doing bitwise operations in any other language, except you wrap all this stuff in bitwise() or bw(). In my opinion, the API is pretty simple to use.
Let’s be honest: there is none. Sass is not a low-level programming language. It does not have any valid use case for bitwise operations. Meanwhile, we implemented bit flags. Bit flags is a programming technique aiming at storing several booleans in a single integer in order to save memory.
Here is a great introduction to bit flags, but I’ll try to sum up. The idea behind bit flags is to have a collection of flags (think of them as options) mapped to powers of 2 (usually with an enum field in C/C++). Each option will have its own bit flag.
00000000 Bin | Dec
│││││││└ 1 << 0 | 1
││││││└─ 1 << 1 | 2
│││││└── 1 << 2 | 4
││││└─── 1 << 3 | 8
│││└──── 1 << 4 | 16
││└───── 1 << 5 | 32
│└────── 1 << 6 | 64
└─────── 1 << 7 | 128
Now, let’s say option A is 1 << 0
(DEC 1) and option B is 1 << 1
(DEC 2). If we OR them:
00000001 (A)
OR 00000010 (B)
= 00000011
The result — let’s call it Z — holds both options, right? To retrieve separately A and B from Z, we can use the AND operator:
00000011 (Z)
AND 00000001 (A)
= 00000001
00000011 (Z)
AND 00000010 (B)
= 00000010
So far so good. Now what if we try to AND Z with option C (1 << 2)?
00000011 (Z)
AND 00000100 (C)
= 00000000
The result of Z & C
isn’t equal to C
, so we can safely assume the C option hasn’t been passed.
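The same flag logic, expressed with JavaScript’s native operators to make the Sass example easier to follow:

```javascript
const A = 1 << 0; // 1
const B = 1 << 1; // 2
const C = 1 << 2; // 4

const Z = A | B;  // 3, holds both A and B

console.log((Z & A) === A); // true: the A flag is set
console.log((Z & B) === B); // true: the B flag is set
console.log((Z & C) === C); // false: the C flag is not set
```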
That’s pretty much how bit flags work. Now let’s apply it to Sass as an example of SassyBitwise. First thing to do, define a couple of flags:
// Flags
$A: bw(1 '<<' 0);
$B: bw(1 '<<' 1);
$C: bw(1 '<<' 2);
$D: bw(1 '<<' 3);
We also need a mixin that theoretically accepts multiple boolean options. As a proof of concept, our mixin will accept a single argument: $options
, a number.
/// Custom mixin
/// @param {Number} $options - Bitwise encoded flags
@mixin custom-test($options) {
is-a-flag-set: bw($options '&' $A);
is-b-flag-set: bw($options '&' $B);
is-c-flag-set: bw($options '&' $C);
is-d-flag-set: bw($options '&' $D);
}
And now we call it, passing it the result of a bitwise OR operation of all our flags.
// Call
test {
@include custom-test(bw($A '|' $C '|' $D));
}
As expected, the result is the following:
test {
is-a-flag-set: true;
is-b-flag-set: false;
is-c-flag-set: true;
is-d-flag-set: true;
}
That’s it folks, SassyBitwise. No point, much fun. As always.
Note: a huge thanks to Valérian Galliat for helping me out with this.
A template engine is some kind of tool helping you write markup. Twig is the template engine coming with Symfony. Both Jekyll and Mixture use Liquid, the template engine from Shopify. You may also have heard of Smarty, Mustache.js or Handlebars.js.
The idea behind any template engine is to have template files that can be used and reused, imported and extended in order to have a dynamic, DRY and reusable HTML architecture. In this article, I will mostly talk about Liquid because it is the one used by Jekyll and Mixture, as well as Twig which I heavily use at work.
Template engines expose global variables. In Liquid, those are mostly the ones declared in your YAML Front Matter (the header from every post). In Twig, they can be data passed from the controller, or super-global variables, whatever.
Sometimes, you need to access such variables in your JavaScript code. Let me make this as clear as possible: writing JavaScript in a template file just because you need a variable from a template is not a clean solution. At work, we had developers writing huge chunks of JavaScript in .html.twig
files because they needed some data from the controller in their JavaScript application. This sucks.
JavaScript should mostly go in .js
file. Markup should go in template files. Not the other way around. Especially not when it’s getting bigger than a couple of lines.
Let’s get back to the initial topic: on my blog, I need to execute some JavaScript snippets depending on the variables declared in the YAML Front Matter from the page I am in. For instance if the article includes a CodePen, I should be able to tell JavaScript to include CodePen JS file. If the article allows comments (which is usually the case), then JavaScript should include Disqus. If I want the article to include a table of contents at the top, then JavaScript should be aware of that and do what needs to be done.
Before moving to Mixture, I handled the problem in a rather drastic (and dirty) way: all templates included a scripts.liquid
file at the bottom. In this file, I wrapped JavaScript snippets with Liquid conditions. For instance:
{% if post.codepen %}
<script src="… source to CodePen JS file …"></script>
{% endif %}
{% if post.comments %}
… Disqus JavaScript snippet …
{% endif %}
{% if post.tableOfContents %}
… Table of contents JavaScript snippet …
{% endif %}
As you can see, this is not ideal. First, JavaScript lays in a template file. We could work around the issue by moving JavaScript snippets to separate .js
files, then only include them when needed but we would possibly do several HTTP requests while a single one could be enough. Secondly, it is ugly. Very ugly.
When moving to Mixture, I took the time to think of how I would solve this issue to end up with a clean and DRY solution. The first thing I wanted to do was putting the JavaScript in a .js
file, so let’s start with that.
// app.js
;(function (global) {
var App = function (conf) {
this.conf = global.extend(
{
codepen: false,
sassmeister: false,
tableOfContent: false,
tracking: true,
ad: true,
comments: false,
layout: 'default',
},
conf || {}
)
this.initialize()
}
App.prototype.initialize = function () {
/* … */
}
global.App = App
})(window)
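Note that this snippet relies on a global extend() helper which is not shown in the post. A minimal implementation (my assumption of what it could look like, not the original) would be a shallow merge:

```javascript
// Shallow-merge options into a copy of the defaults
function extend(defaults, options) {
  const result = {};
  for (const key in defaults) result[key] = defaults[key];
  for (const key in options) result[key] = options[key];
  return result;
}

// Exposed globally, as app.js expects:
// window.extend = extend;

console.log(extend({ codepen: false, tracking: true }, { codepen: true }));
// { codepen: true, tracking: true }
```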
So what’s going on here? In a JavaScript file, in a closure, we define a new class called App
, that can be instantiated with an object of options (conf
). This one is extended with an object of default parameters. When instantiated, it automatically calls the initialize()
method. Let’s see what it does.
App.prototype.initialize = function () {
if (this.conf.tracking === true) {
this.tracking()
}
if (this.conf.ad === true) {
this.ad()
}
if (this.conf.comments === true) {
this.comments()
}
if (this.conf.codepen === true) {
this.codepen()
}
if (this.conf.sassmeister === true) {
this.sassmeister()
}
// …
}
No magic here, the initialize()
method simply calls other methods based on the configuration. We could simplify the code even more by calling the methods based on the configuration key names:
;['tracking', 'ad', 'comments', 'codepen', 'sassmeister'].forEach(
function (key) {
if (this.conf[key] === true) {
this[key]()
}
}.bind(this)
)
But it’s no big deal, we don’t really need this. And now, the other methods:
App.prototype.tracking = function () {
global._gaq = [['_setAccount', 'UA-XXXXXXXX-X'], ['_trackPageview']]
this._inject('//www.google-analytics.com/ga.js')
}
App.prototype.ad = function () {
this._inject('//engine.carbonads.com/z/24598/azcarbon_2_1_0_HORIZ')
}
App.prototype.codepen = function () {
this._inject('//codepen.io/assets/embed/ei.js')
}
App.prototype.sassmeister = function () {
this._inject('//static.sassmeister.com/js/embed.js')
}
App.prototype._inject = function (url) {
var d = document,
s = 'script',
g = d.createElement(s),
z = d.getElementsByTagName(s)[0]
g.async = true
g.src = url
z.parentNode.insertBefore(g, z)
}
All resources are loaded asynchronously thanks to the _inject
(pseudo-)private function.
We still haven’t really solved the problem yet. How are we going to pass our Liquid variables to the JavaScript? Well, this is the moment we need to get back to the scripts.liquid file. No more conditional JavaScript snippets; instead, we instantiate the App class.
<script src="/assets/js/main.min.js"></script>
<script>
document.addEventListener('DOMContentLoaded', function() {
var app = new App({
codepen: {{ post.codepen }},
sassmeister: {{ post.sassmeister }},
layout: '{{ post.layout }}',
tracking: true,
ad: true
});
});
</script>
This is the only chunk of JavaScript in a template file. It is called on every page, once the DOM has been fully loaded. It grabs data from the YAML Front Matter in a clean and dynamic way. Then, JavaScript deals with the rest.
There you have it. A clean JavaScript application running on template variables, yet not using engine’s conditional tags or being written in a template file.
If you think of anything to improve it, be sure to share. In any case, I hope you liked it. :)
]]>Obviously I accepted and took a plane to travel 900 kilometers from home with my dearest in order to give a talk about Sass architecture. Hence a short blog post to relate how it went.
Well, it went very well! Except for the weather which was pretty ugly and the fact that my girlfriend got her phone stolen. Anyway, the trip was worth it and we enjoyed Belgium.
The Co.Station is a great spot in the heart of Brussels, aiming to help startups, associations and businesses grow. In this case, Co.Station was hosting FeWeb’s event.
The room we were in was lovely. Completely made of white wood, perfectly lit, very comfy. But… it was not a room made for such a crowd. We were 120, yet I think it would be best suited for around 80 to 90 people. Sorry to those poor people who had to stand for almost 2 hours; that sucked.
FeWeb’s events are usually composed of 2 successive talks, then a couple of beers (remember, it’s in Belgium). So Thierry Michel gave an introduction to Sass and Compass, then I talked about architecture and components.
Both talks received positive feedback, so I guess we did the job well. However, I was kind of nervous at first and was speaking pretty fast, making my session a couple of minutes shorter than expected.
Also, when tense, I tend to speak in a low voice, certainly as an attempt to calm things down. Because of that, I had to hold the mic in my hand during the whole presentation. Trust me, figuring out the distance between your mouth and the mic every single time you say something is definitely not the kind of thing you want to think about.
Anyway, I eventually went through the whole session and ended my talk peacefully. The audience was receptive and we got some interesting questions (what about post-processors, Autoprefixer…), so it was pretty cool.
Once again I made my slides with Slid.es, the CMS for Reveal.js. I even subscribed to a PRO account to have access to all the cool features (offline and private decks, Dropbox sync, custom CSS, export to PDF…). I also used the presentation mode from Slid.es, which is great. Absolutely not disappointed.
I won’t walk through my slides like I did for my talk at KiwiParty 2013 because in this case it is less focused on code. Plus, I think they are better designed than the previous one.
Anyway, here they are (in French).
A warm thank you to FeWeb for their great welcome, the beer and the fries. And if you — whoever you are — were there on May 8th, thank you. If you were not, let’s hope we meet at another event.
]]>First of all, let me introduce a little bit of background: as far as I can remember, I have always liked writing. Back in high school, I spent most French lessons writing prose or short stories. A few years later (late 2010), I launched a blog (in French) about World of Warcraft that got quite popular at the time. This was mostly because I published about 1,000 words a day, for almost a year.
Long story short, you could say I’m a writer. Ironically, I have never enjoyed reading. You would think someone who likes to write also likes to read, but that is not my case. I don’t like reading. Especially books. I find it boring. Enough back story, let’s move on.
In this section, I’ll tell you how I went from doing CSS drawings on CodePen, to writing for SitePoint, CSS-Tricks and The Sass Way in about a year and a half. If you really just want to know how I write my articles, feel free to skip to the next section.
During the summer vacation of 2012, I got contacted by Pedro Botelho, one of the two folks behind Codrops (the other one being the awesome Manoela Ilic, whom I later interviewed on this very blog), asking whether I’d be interested in writing for Codrops. I was mostly unknown (not that I am especially popular today) at that time and spent most of my free time doing silly CSS demos on CodePen.
Obviously I said yes and got to write quite a few posts for Codrops between September 2012 and July 2013, including some pretty popular ones like Troubleshooting CSS. After a dozen articles over a year, I realized Codrops was looking for design-related posts while I felt more technically focused. As a matter of fact, my last posts at Codrops were quite technical (dealing with CSS counters, clip(), click events…).
At that time (mid 2013), Chris Coyier from CSS-Tricks was looking for authors to help him complete the Almanac, an alternative to MDN Docs on CSS selectors and properties. Being a big fan of Chris' work for years now, I have been helping him as much as I could, especially with a few interesting entries like CSS Grid System, A Complete Guide to Flexbox and a few other cool properties as well.
I still help Chris update the Almanac whenever I can. I recently added a couple of entries, and we will soon update the Flexbox guide if Chris still wants me to. I’m learning a lot and it’s a real pleasure to contribute to this famous site that is CSS-Tricks.
David Walsh and Chris being good buddies, David asked me if I’d be interested in writing a little article about Sass for his blog as a guest author (July 2013). A couple of days later, he released Looking into the Future of Sass where I explained what’s coming up in Sass 3.3 which was on the tracks back then. Even today, this article is still getting linked to as an alternative to Sass' official changelog. Needless to say you should check the changelog rather than external articles. ;)
At the very beginning of 2014, I gathered my courage and let David release another guest post from me, this time about JavaScript. Yes, you read that right! I explained how I built a CRUD JavaScript class. I spent a couple of days working on this piece of code, and it was kind of a big deal for a JavaScript newbie like me to talk about it, especially on David Walsh’s blog, David being well known for his JS skills. Thankfully I got some positive feedback, so it’s all good!
A few months after my first article for David Walsh (October 2013), I think it was John W. Long from The Sass Way who asked me whether I’d like to publish a write-up at The Sass Way. The Sass Way being one of the most central places for Sass-related stuff, I jumped at the occasion and released a completely silly post about math sequences in Sass. While this was very interesting from a strictly technical point of view, it had absolutely no point whatsoever — making the article completely useless.
Thankfully, John gave me some extra opportunities to release more interesting articles on The Sass Way, including a cool one about how to programmatically go from one color to another which is — in my opinion — quite neat, especially if you’re interested in how colors work.
In late January 2014, I got contacted by Louis Lazaris (who had just been named to a new position at SitePoint) asking if I wanted to fill their CSS section with a couple of Sass articles. Louis told me SitePoint was willing to provide some content about CSS preprocessors at the time, so he thought about me (thanks Louis!).
The day after, I sent him a first article, ready to roll. And over the weekend that followed, I sent him 2 or 3 new write-ups about Sass. At some point, my articles were not even passing through the Approved topics and Work in progress columns on Trello but popping directly into Ready for edit. For about 3 months now, SitePoint has been releasing one of my articles every week and I have to say I am very glad to be part of this.
I’m busting my ass to provide interesting and fresh Sass content (when it’s not too technical, in which case I keep it for my own blog). It’s a really great adventure, so I hope that months from now, I’ll still be giving them food for thought.
Finally, a few weeks ago (March 2014), Ian Yates from Webdesign Tuts+ got in touch with me to ask if I could write a little something about Sass. A round of applause for Tuts+ because for once, someone contacted me to ask for something very specific, not just about Sass or CSS in general. In this case, Ian asked me to talk about error handling in Sass.
This led to the fastest turnaround in history: the very same day, I was able to hand the finished article over to Ian, and it was released on Webdesign Tuts+ the day after. So in about 24 hours, we went from not knowing each other to having released an article on the site. That being said, it was fast because he knew right away what he wanted from me (and because I had some free time that day).
I really enjoyed how things went the first time so I hope Webdesign Tuts+ and I will keep working together in the future.
Last but not least, shortly after I started writing for Codrops, in November 2012, I launched my own blog to write about experiments and stuff. I’ve been writing almost once a week since then, and plan on keeping it up for as long as I can.
So far, I have talked about all the places I’ve been writing for, but not really about how I write. As you may have noticed, things usually move fast: in most cases, the first article is out a couple of days after establishing contact. Apart from the fact I have some free time in the evenings when I can write, there are a couple of other reasons.
Things are easier when you know what you’re talking about. Have you ever tried to explain something you barely know to someone? It hurts. You stutter. You make sentences that don’t always make sense. You take time to think before answering… It takes time and effort. When you know your topic, it gets simple. You don’t have to think carefully before you speak. It comes naturally.
Remember the article about JavaScript for David Walsh? That one took longer. Some Almanac entries for Chris Coyier took me days to write, especially the one about CSS Grid. But when I write about Sass (which is usually the case), it gets very easy. Except for the little things I still don’t get about Sass, I’m okay with talking about it for hours.
I never ever start an article without finishing it. Even this one you are currently reading. I wrote it in a single shot. If I leave an unfinished article, it will remain unfinished and won’t ever be released. I still have a draft from March 2013 which was meant to be an article about table design for Codrops.
I just can’t get back to an article I started. This might look incapacitating but I see it as a strength. Writing an article from beginning to end in a single session helps me keep track of my thoughts and produce a structured, meaningful result.
I’ve seen friends working days on an article before delivering / releasing it. God, that would kill me. From start to end, every time. One shot.
This might seem silly, but I am a very fast typist and this is not trivial when writing a lot. I usually sit at a comfortable 80 to 100 words per minute while being able to hit up to 120 words per minute with appropriate music in my ears.
I grew up without a TV but as far as I can remember, there has always been a computer at home. When I was 3, my brother put me on Street Fighter, and I was smashing the keyboard with my little fingers without understanding much of what I was doing. Before I was even 10, I started playing online. Which means typing to talk with people.
A few years later in secondary school, we had typing lessons to help us use a keyboard; by then it was already second nature to me. I remember finding a website where you had to type the alphabet as fast as you could. Then, there was a scoreboard displaying the best scores. After a couple of days of practice, I managed to type the entire Latin alphabet in about 2-3 seconds. As silly as this exercise may be, it helped a lot with hitting a number of keys in a short amount of time.
Anyway, being a fast typist is part of how I am able to release so many articles. Writing a post doesn’t take forever because I can type almost as fast as I speak.
Well, it involves Markdown, for sure. If you ask me about the greatest improvement regarding writing for the web, I’d say it’s Markdown. Being able to have structured content that doesn’t hurt to read is essential. Add a syntax highlighter and you’ve got the holy grail of web writing. I’m not sure I’d be writing this much if it weren’t for Markdown.
Anyway, I usually open a GitHub Gist or Sublime Text and start writing in Markdown mode. As I’ve explained in the previous section, once I’ve started I don’t stop until the end. It usually takes anywhere from a couple of minutes to a few dozen minutes, depending on the article’s length. If everything is not perfect at first, it’s no big deal. What’s important is that I have a backbone.
Once I’m done, I proofread the whole thing as if I were discovering it for the first time. I fix typos and try to polish my English so it’s not too painful for the reader (a.k.a. you). It’s very unusual that I have to rewrite a whole section, but it happens. In that case, I just fill in the blanks or update as needed.
When the content seems fine, I have another read to see if I can add extras which would make the article more appealing: quotes, images, demos (usually as a Pen or a SassMeister Gist). If there is room for those, I add them.
And finally, I hand it over to the site that will publish it (e.g. SitePoint) or schedule it on my own blog.
There we are folks; you know everything! There is no magic. I just love what I do, so I enjoy writing about it. That’s why I’ve been able to write about 50 articles since the beginning of the year.
When I get some free time and a cool little idea in the back of my head, I open a GitHub Gist, switch to Markdown and start typing. A couple of minutes later, the article is done, and I only have to proofread it.
Of course it is time consuming. Yet, I try to find some time to write, because I really enjoy it. That’s all. If you want to write, you just have to love what you do.
]]>top, right, bottom and left.
The mixin was directly inspired by Nib, Stylus’ most popular framework. The idea is to be able to declare all desired offsets in a single declaration rather than having to write multiple CSS properties.
// Stylus syntax
selector {
absolute: top 1em right 100%;
}
When looking back at Nib’s documentation a couple of weeks ago, I noticed there are a couple of features I missed when implementing the Sass version of this little gem. Hence the brand new version of the mixin, and the blog post explaining the process.
Unfortunately, Sass in its SCSS syntax doesn’t provide as much abstraction as Stylus does, so we still have to use some extra characters, especially @include, parentheses, colons and semicolons… That being said, the result is quite good in my opinion.
// SCSS
selector {
@include absolute(top 1em right 100%);
}
Before jumping into the code, it is important to analyze the topic so we can implement things right. There are a few different use cases, but the main idea is always the same: we loop through the 4 offsets to see if they are being passed to our mixin. Then, depending on what we find, various things happen. Let’s see each case one by one.
Case 1. The offset has not been found in the list. Obviously, we stop there and do not output it.
Case 2. The offset has been found at the last index of the list. We output it with a value of 0.
// SCSS
@include absolute(top);
// CSS
position: absolute;
top: 0;
Case 3. The offset has been found and the next item is another offset. We output it with a value of 0.
// SCSS
@include absolute(top left);
// CSS
position: absolute;
top: 0;
left: 0;
Case 4. The offset has been found and the next item is invalid. An invalid value is any string other than auto, initial and inherit, any value that is not a number, or a unitless number. In any case, we do not output the offset.
// SCSS
@include absolute(top 'string');
// CSS
position: absolute;
Case 5. The offset has been found and the next item is valid. Of course then, we output the offset with the next item as a value.
// SCSS
@include absolute(top 1em);
// CSS
position: absolute;
top: 1em;
So if we sum up:
- offset not found in the list: output nothing;
- offset found last, or followed by another offset: output it with a value of 0;
- offset followed by an invalid value: output nothing;
- offset followed by a valid value: output it with that value.
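To make the case analysis concrete, here is a rough JavaScript sketch of the same parsing logic (the function names and the unit-checking regular expression are mine, for illustration only; the mixin itself is written in Sass):

```javascript
// Valid length: one of the allowed keywords, or a number with a unit
// (unitless numbers are invalid, per case 4 above).
function isValidLength(value) {
  if (['auto', 'initial', 'inherit', '0'].indexOf(value) !== -1) return true;
  return /^-?\d+(\.\d+)?[a-z%]+$/.test(value);
}

var OFFSETS = ['top', 'right', 'bottom', 'left'];

// Turn a list like ['top', '1em', 'right', '100%']
// into an object of offsets: { top: '1em', right: '100%' }
function parseOffsets(args) {
  var result = {};
  OFFSETS.forEach(function (offset) {
    var index = args.indexOf(offset);
    if (index === -1) return; // case 1: offset not found, output nothing
    var next = args[index + 1];
    if (next === undefined || OFFSETS.indexOf(next) !== -1) {
      result[offset] = '0'; // cases 2 & 3: last item, or followed by another offset
    } else if (isValidLength(next)) {
      result[offset] = next; // case 5: followed by a valid length
    }
    // case 4: followed by an invalid value, output nothing
  });
  return result;
}
```

For instance, parseOffsets(['top', 'right', 'mistake']) only yields { top: '0' }, mirroring case 4.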
As you may have understood from what we have just seen, we will need to determine whether the value directly following the offset is a valid value for an offset property (top, right, bottom or left). Nothing better than a little function to do that.
The following should be considered valid lengths:
- 0
- auto
- initial
- inherit
- any number with a unit
@function is-valid-length($value) {
@return (type-of($value) == 'number' and not unitless($value)) or (index(auto initial inherit 0, $value) != false);
}
The function is as simple as that: first we check whether the value is a number with a unit; if not, we check whether it is one of the allowed keywords. If neither check passes, it is not a valid length for an offset property.
Now that we have our helper function and all our use-cases, it is time to move on to the mixin.
@mixin position($position, $args: ()) {
$offsets: top right bottom left;
position: $position;
@each $offset in $offsets {
// Doing the magic trick
}
}
From there, we iterate through the offsets list (so 4 times) and for each one, we do the checks we discussed in the first section of this article. I added comments to the code so you can follow along, but it is pretty straightforward anyway.
// All this code happens inside the loop
$index: index($args, $offset);
// If offset is found in the list
@if $index {
// If it is found at last position
@if $index == length($args) {
#{$offset}: 0;
}
// If it is followed by a value
@else {
$next: nth($args, $index + 1);
// If the next value is a valid length
@if is-valid-length($next) {
#{$offset}: $next;
}
// If the next value is another offset
@else if index($offsets, $next) {
#{$offset}: 0;
}
// If it is invalid
@else {
@warn "Invalid value `#{$next}` for offset `#{$offset}`.";
}
}
}
Then of course, there are still the 3 extra mixins absolute, relative and fixed. This doesn’t change at all from the previous version.
@mixin absolute($args: ()) {
@include position(absolute, $args);
}
@mixin fixed($args: ()) {
@include position(fixed, $args);
}
@mixin relative($args: ()) {
@include position(relative, $args);
}
Here are a few examples of what the mixin generates:
// SCSS
.a {
@include absolute();
}
// CSS
.a {
position: absolute;
}
// SCSS
.b {
@include absolute(top);
}
// CSS
.b {
position: absolute;
top: 0;
}
// SCSS
.c {
@include absolute(top right);
}
// CSS
.c {
position: absolute;
top: 0;
right: 0;
}
// SCSS
.d {
@include absolute(top right bottom);
}
// CSS
.d {
position: absolute;
top: 0;
right: 0;
bottom: 0;
}
// SCSS
.e {
@include absolute(top right bottom left);
}
// CSS
.e {
position: absolute;
top: 0;
right: 0;
bottom: 0;
left: 0;
}
// SCSS
.f {
@include absolute(top right 1em);
}
// CSS
.f {
position: absolute;
top: 0;
right: 1em;
}
// SCSS
.g {
@include absolute(top 1em right);
}
// CSS
.g {
position: absolute;
top: 1em;
right: 0;
}
// SCSS
.h {
@include absolute(top 1em right 100%);
}
// CSS
.h {
position: absolute;
top: 1em;
right: 100%;
}
// SCSS
.i {
@include absolute(top right mistake);
}
// CSS
.i {
position: absolute;
top: 0;
}
// SCSS
.j {
@include absolute(top 1em right 1em bottom 1em left 1em);
}
// CSS
.j {
position: absolute;
top: 1em;
right: 1em;
bottom: 1em;
left: 1em;
}
So here we go with the new version, people. It is slightly better than the old one since you can now chain offsets to set them to 0, and extra keywords like auto, initial and inherit are allowed, which wasn’t the case before.
I hope you like it. If you think of anything to improve it, be sure to share!
]]>The following is a guest post by Daniel Guillan. Daniel is the co-founder and chief design officer at Vintisis. I am very glad to have him here today, writing about a clever mixin to ease the use of Modernizr with Sass.
I use Modernizr on every single project I work on. In a nutshell, it’s a JS library that helps us make decisions based on the capabilities of the browser accessing our site. Modernizr quickly performs tests to check for browser support of modern CSS and HTML implementations like CSS 3D Transforms, HTML5 Video or Touch Events, among many, many others.
Once it has checked for the features we intend to use, Modernizr appends classes to the <html>
tag. We can then provide a set of CSS rules to browsers that support those features and another set of fallback rules to browsers that don’t support them.
I created a Sass mixin that helps us write those rules in a DRYer and more comprehensive way, reducing the amount of code needed and making it less error-prone and far easier to read and maintain.
Before jumping into the code for the actual mixin, let’s see how we actually write Modernizr tests in plain CSS.
This is how we can write a rule-set to add a CSS3 gradient background:
.cssgradients .my-selector {
background-image: linear-gradient(to bottom, #fff, #000);
}
For browsers that don’t support CSS gradients, or those where JavaScript is not available or disabled and thus we can’t test for support, we will need a fallback rule-set:
.no-js .my-selector,
.no-cssgradients .my-selector {
background-image: url('gradient.png');
background-repeat: repeat-x;
}
Sass allows selectors and rules to be nested so we can make that code prettier and much more organized, avoiding repetition of the selector:
.my-selector {
.cssgradients & {
background-image: linear-gradient(to bottom, #fff, #000);
}
.no-js &,
.no-cssgradients & {
background-image: url('gradient.png');
background-repeat: repeat-x;
}
}
Having written a lot of selectors and rules like the above, I got a bit tired of that code. It’s not complicated code at all, but it’s a bit messy, it isn’t that easy to read and maintain, and I tend to forget to add the .no-js & bit. So I thought a couple of mixins would do the job.
One mixin would write the rule-set for available features. I called it yep
. The other one, nope
, would add the fallback rule-set. We use them like so:
.my-selector {
@include yep(cssgradients) {
// …
}
@include nope(cssgradients) {
// …
}
}
That’s extremely easy, I thought. This is all the code we actually need to make those two mixins work:
@mixin yep($feature) {
.#{$feature} & {
@content;
}
}
@mixin nope($feature) {
.no-js &,
.no-#{$feature} & {
@content;
}
}
Ouch! What if we need to test for multiple features at the same time?
It isn’t as straightforward as I first thought. The yep
mixin should not produce the same kind of selectors as the nope
mixin. Take this example: we want to test for csstransforms
and opacity
and declare a specific rule-set. But if one of those features isn’t supported, we need to fall back on another rule-set.
This is the compiled CSS we are looking for:
.csstransforms.opacity .my-selector {
// …
}
.no-js .my-selector,
.no-csstransforms .my-selector,
.no-opacity .my-selector {
// …
}
One thing I strived for was to keep the code as DRY as possible using some of the newness in Sass 3.3. As I worked through the logic I found that a single mixin could handle both cases.
I created a main modernizr
mixin to handle both situations. You won’t use it directly on your Sass stylesheet, but it’s used internally by yep
and nope
. In fact, yep
and nope
are merely aliases of this more complex mixin. They only do one thing: call the modernizr
mixin with the set of features you’re passing, and set a $supports
variable you won’t need to remember.
That’s it, they’re meant to be easier to remember because they require only one parameter ($features...), faster to write because they are shorter, and they make the whole thing extremely easy to read because you instantly know what the intention of the code is.
// `yep` is an alias for modernizr($features, $supports: true)
@mixin yep($features...) {
@include modernizr($features, $supports: true) {
@content;
}
}
// `nope` is an alias for modernizr($features, $supports: false)
@mixin nope($features...) {
@include modernizr($features, $supports: false) {
@content;
}
}
The modernizr mixin expects two arguments: $features, our arglist (a comma-separated list of features), and $supports, a boolean which will be used to output either the yep or the nope rules.
@mixin modernizr($features, $supports) {
// Sass magic
}
Inside the mixin I set three variables to handle everything we need to generate.
We need to use the no-
prefix if checking for unsupported features (e.g. .no-opacity
). If checking for supported features we need no prefix at all so we’ll use an empty string in this case:
$prefix: if($supports, '', 'no-');
To generate our feature selector (e.g. .opacity.csstransforms or .no-opacity, .no-csstransforms), we need two different strategies: create a string if checking for supported features and concatenate the class names later on, or create a list if checking for unsupported features and append class names to it later on.
$selector: if($supports, '', unquote('.no-js'));
You’ll see that all the magic that handles this thing is done by a placeholder. We’ll need to give it a name that will look something like %yep-feature
or %nope-feature
.
$placeholder: if($supports, '%yep', '%nope');
I also set a variable $everything-okay: true
which is meant for error handling. More on this later on.
Now it’s time to create our feature selectors and our placeholder names. We’ll loop through the passed $features
to do so:
@each $feature in $features {
// …
}
Within that loop we just need three lines of code. They’re a bit heavy, but what they accomplish is quite simple:
$placeholder: $placeholder + '-' + $feature;
The resulting $placeholder variable will look something like %yep-opacity-csstransforms or %nope-opacity-csstransforms.
$new-selector: #{'.' + $prefix + $feature};
$selector: if(
$supports,
$selector + $new-selector,
append($selector, $new-selector, comma)
);
$new-selector
will look something like .csstransforms
or .no-csstransforms
. We then concatenate $new-selector
or append it to the list (e.g. .opacity.csstransforms
or .no-opacity, .no-csstransforms
).
That’s it for generating our placeholder and selector names. Take the opacity
and csstransforms
example. This is the result of using @include yep(opacity, csstransforms)
:
@debug $placeholder; // %yep-opacity-csstransforms
@debug $selector; // .opacity.csstransforms
And this the result of using @include nope(opacity, csstransforms)
:
@debug $placeholder; // %nope-opacity-csstransforms
@debug $selector; // .no-js, .no-opacity, .no-csstransforms
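For illustration, the selector-building logic boils down to something like this in plain JavaScript (the function names are mine, not part of the mixin):

```javascript
// Supported features: concatenate class names into one compound selector,
// e.g. ['opacity', 'csstransforms'] gives '.opacity.csstransforms'
function yepSelector(features) {
  return features
    .map(function (feature) { return '.' + feature; })
    .join('');
}

// Unsupported features: build a comma-separated selector list,
// always starting with the '.no-js' fallback class
function nopeSelector(features) {
  return ['.no-js']
    .concat(features.map(function (feature) { return '.no-' + feature; }))
    .join(', ');
}
```

Both results match the @debug output above.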
It’s time to write our placeholder. We use Sass interpolation to write the name we’ve generated within the loop and then print the declaration block (@content
) we’ve passed within the yep
or nope
mixin.
#{$placeholder} & {
@content;
}
Now we’ll print our feature $selector(s) and extend the placeholder. But there’s a little problem here: if we extend the placeholder as-is:
#{$selector} {
@extend #{$placeholder};
}
we’ll get an unexpected CSS output:
.my-selector .opacity.csstransforms .my-selector {
// …
}
We need something to fix this. Sass 3.3's @at-root directive comes to the rescue:
@at-root #{$selector} {
@extend #{$placeholder};
}
Now our features selector isn’t placed before the actual selector because @at-root
cancels the selector nesting.
@if type-of($feature) != 'string' {
$everything-okay: false;
@warn '`#{$feature}` is not a string for `modernizr`';
} @else {
// proceed …
}
Within the previous loop we’ll also check that every $feature is a string. As Kitty Giraudel explains in their introduction to error handling in Sass, we shouldn’t let the Sass compiler fail and punch us in the face with an error. That’s why we should prevent things like 10px or even nested lists like (opacity csstransforms), hsla from stopping our stylesheet from successfully compiling.
If a wrong parameter is passed, the compilation won’t fail, but nothing will be generated and you’ll be warned of the problem.
If $everything-okay is still true after we iterate through the list of features, we’re ready to generate the output code.
It all started as a small Sass experiment and ended up being an incredibly interesting challenge. I came up with a piece of code that I never thought would make me push the Sass syntax as far as I did. It was really interesting to develop a solution that uses so many different Sass features like the @at-root
directive, loops (@each
), the ampersand (&
) to reference parent selectors, the if()
function, placeholders, list manipulation, … and also stuff like mixin aliases and error handling.
That’s it, you can play with the code on SassMeister or view the documentation and download on Github. The Modernizr mixin is available as a Compass extension too.
]]>Before digging into Sass awesomeness, let’s first have a look at how we would do it in JavaScript:
var Class = function(conf) {
this.conf = extend(
{
duration: 2000,
name: 'class',
theme: 'dark',
speed: 500
},
conf || {}
)
this.init()
}
So what’s going on here? The Class
constructor is accepting a conf
parameter. Then it defines its own conf
property by merging the given object with a default configuration via the extend
function. If conf
isn’t defined, then it extends an empty object with default properties.
Extending an object based on another one is very convenient when you want to allow users to define their own configuration while still providing defaults in case they don’t set all arguments.
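The extend function itself isn’t shown above; a minimal sketch of such a shallow-merge helper could look like this (an assumption on my part, not necessarily the exact implementation):

```javascript
// Shallow merge: copy the defaults first, then let the
// user-provided values override them, key by key.
function extend(defaults, conf) {
  var result = {};
  var key;
  for (key in defaults) {
    if (defaults.hasOwnProperty(key)) result[key] = defaults[key];
  }
  for (key in conf) {
    if (conf.hasOwnProperty(key)) result[key] = conf[key];
  }
  return result;
}
```

With this, new Class({ theme: 'light' }) keeps the default duration, name and speed while overriding the theme.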
One could ask what is wrong with having several arguments in the signature with a default value for each of them. The tl;dr version is that using an object is just easier and more convenient. Now, if you want the details, here are the reasons why a single object parameter sounds better than several parameters.
To begin with, using an object makes the call easier to understand since you have to specify the key associated with each value. While slightly longer to write, it’s easier to read; a fair trade-off in my opinion.
// This…
f({
message: 'You shall not pass!',
close: false,
error: 42,
type: 'error'
})
// … is easier to understand than this
f('You shall not pass!', false, 42, 'error')
But the readability argument is kind of a poor one. Some would say that they feel very comfortable with the multiple-arguments notation as long as they use proper indentation for each argument (kind of like the object one), so let’s move on to something more robust.
It’s generally simpler to store an object in a variable and then pass it to the function rather than storing each individual parameter in its own variable. While .call() and .apply() let you do something along these lines, it’s not great for readability (again!).
// This…
var conf = {
message: 'You shall not pass!',
close: false,
error: 42,
type: 'error'
}
f(conf)
// … is easier to read than this
var conf = ['You shall not pass!', false, 42, 'error']
f.apply(void 0, conf)
Still not convinced? Let’s move on.
Adding or removing a parameter is as easy as updating the configuration object. No need to update all the calls or change the argument order if some of them are optional.
// Adding a parameter is simple; no need to worry about argument order
f({
message: 'You shall not pass!',
close: false,
error: 42,
type: 'error',
duration: 5000
})
// … while you have to put your required parameters before optional ones in the signature
f('You shall not pass!', 42, false, 5000, 'error')
Last but not least, I think the object notation makes it simpler to provide default arguments with an extend function than the multiple-arguments notation, since JavaScript doesn’t support default values for arguments in the function signature (while PHP, Sass and other languages do). Because of this, using an object is definitely more elegant than multiplying ternary operators to check whether arguments are defined or not.
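To see the difference, compare manually defaulting each argument against merging a configuration object (both notify helpers below are hypothetical names, written for illustration):

```javascript
// Multiple-arguments style: one ternary-like check per optional argument
function notifyArgs(message, close, error, type) {
  close = typeof close !== 'undefined' ? close : true;
  error = typeof error !== 'undefined' ? error : 0;
  type = typeof type !== 'undefined' ? type : 'info';
  return { message: message, close: close, error: error, type: type };
}

// Configuration-object style: a single merge against the defaults
function notifyConf(conf) {
  var defaults = { message: '', close: true, error: 0, type: 'info' };
  var result = {};
  var key;
  for (key in defaults) result[key] = defaults[key];
  for (key in conf) result[key] = conf[key];
  return result;
}
```

Adding a new option to notifyConf means touching only the defaults object, while notifyArgs needs a new parameter and a new check.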
I think we can agree on the fact that using a configuration object as a unique parameter is both better and more elegant than using a bunch of chained arguments. Now let’s move on to the core of this article: bringing this to Sass.
In a way, we don’t really need this in Sass because it already provides named arguments. Named arguments give the ability to call a function without having to specify all its parameters. You can call it specifying only the arguments you want, no matter their index in the parameter list, like this:
@mixin mixin($a: 'a', $b: 'b', $c: 'c') {
/* … */
}
@include mixin($b: 'boat');
This is pretty neat. But if like me you’d rather have a single object instead of a collection of arguments, then read on.
Sass 3.3 brings maps, which are the exact equivalent of JavaScript objects. Now that we have maps, we can do all the cool stuff we just talked about, and this is amazing. All we need is an extend function to be able to extend a given object with an object of default parameters.
This could have been very easy to do, but map-merge already does it for us. Indeed, when merging two maps it does exactly what we want: extend one map with the other. At best, we could alias the map-merge function as an extend function:
@function extend($obj, $ext-obj) {
@return map-merge($obj, $ext-obj);
}
So here it is:
$default-object: (
dont: you think,
this: is awesome
);
$object: (this: is amazing);
$merge: extend($default-object, $object);
/**
* This results in
$merge: (
dont: you think,
this: is amazing
);
*/
Now what’s the point of all of this? Let’s say you have a component you call with a mixin. This mixin accepts quite a few parameters like — I don’t know — the width, the color scheme, the animation duration, maybe a name or something. They probably have some default values defined to match a common use case. Until now, you have done it like this:
@mixin component($theme: light, $size: 100%, $duration: 250ms, $name: 'component', $border: true) {
.#{$name} {
width: $size;
animation: fade $duration;
@if $border {
border-top: 0.25em solid;
}
@if $theme == 'dark' {
background: #333;
color: #fefefe;
} @else if $theme == 'light' {
background: #fefefe;
color: #333;
}
}
}
// Including component
@include component(dark, $name: 'module');
This works great. It is easily readable and does the job very well. However there is one thing that still sucks with this method: you can’t easily move the configuration elsewhere. Actually you can, but it means moving around something like 5 variables, which is getting to be a lot. A configuration map would be easier to move into a variables file or the like.
@mixin component($conf: ()) {
// Extending the default arguments with the given object
$conf: extend(
(size: 100%, theme: dark, duration: 250ms, name: 'component', border: true),
$conf
);
// Dumping CSS
.#{map-get($conf, name)} {
width: map-get($conf, size);
animation: fade map-get($conf, duration);
$theme: map-get($conf, theme);
@if $theme == 'dark' {
background: #333;
color: #fefefe;
} @else if $theme == 'light' {
background: #fefefe;
color: #333;
}
}
}
// Including component
@include component((
theme: dark,
name: 'module'
));
The two don’t look much different, except that the core of the object version looks more crowded. True, but separating the setup from the code is now very easy. All you have to do is define a map and pass it to the mixin. No need to move around a handful of variables, which can quickly become a mess.
// In `_config.scss` along with your other setup variables
$component-conf: (
theme: light,
name: 'module'
);
// In `_component.scss`
@include component($component-conf);
There you go folks. This is definitely a more “Object” approach than the previous one and I can understand some people not liking it because it doesn’t look like we are dealing with CSS anymore.
Now if you ask me, not only does it make both the mixin signature cleaner, but it also gives you more flexibility about your code structure and this is a big deal when working on a huge project with countless components. Being able to gather configuration maps in a variables file can make a huge difference when it comes to code maintenance.
And while the mixin core is a little more crowded due to the map getters, the trade-off can be worth it in some cases.
SassyCast makes it possible to convert (almost) any data type into any other, and it includes a way to cast a map into a list. While the function I wrote was fairly straightforward, Julien Cabanes showed me a cool little improvement to it on Twitter. I merged his code in SassyCast 1.0.0.
The to-list function core is pretty straightforward. If the given value is a map, we iterate over it to create a 2-dimensional list like this: ("key-1" "value 1", "key-2" "value 2").
@function to-list($value) {
@if type-of($value) == 'map' {
$keys: ();
$values: ();
@each $key, $val in $value {
$keys: append($keys, $key);
$values: append($values, $val);
}
@return zip($keys, $values);
}
@return if(type-of($value) != 'list', ($value,), $value);
}
To be a little more precise about what’s being done here: we loop through each map entry, store the key in a $keys list and the value in a $values list. Then we zip both to return a 2-dimensional list where the first element of each pair is the former key and the second element is the former value.
Does the job well.
Julien thought it would be cool to be able to keep only the keys, only the values, or both (what I’ve done), so he added an extra parameter to the function accepting either keys, values or both. Any other value falls back to both.
Then depending on the flag, he returns either $keys, $values or a zip of both.
@function to-list($value, $keep: 'both') {
$keep: if(index('keys' 'values', $keep), $keep, 'both');
@if type-of($value) == 'map' {
$keys: ();
$values: ();
@each $key, $val in $value {
$keys: append($keys, $key);
$values: append($values, $val);
}
@if $keep == 'keys' {
@return $keys;
} @else if $keep == 'values' {
@return $values;
} @else {
@return zip($keys, $values);
}
}
@return if(type-of($value) != 'list', ($value,), $value);
}
If you don’t like conditional return statements or if you simply want to look like a badass with an unreadable ternary mess, you could return something like this:
@return if($keep == 'keys', $keys, if($keep == 'values', $values, zip($keys, $values)));
Literally:
if $keep is 'keys', return $keys
if $keep is 'values', return $values
otherwise, return zip($keys, $values)
Let’s try it with a little example, shall we? First, our map.
$breakpoints: (
'small': 600px,
'medium': 900px,
'large': 1200px
);
And now, we cast it to a list.
$breakpoints-list: to-list($breakpoints, 'both');
// ('small' 600px, 'medium' 900px, 'large' 1200px)
$breakpoints-keys: to-list($breakpoints, 'keys');
// ('small' 'medium' 'large')
$breakpoints-values: to-list($breakpoints, 'values');
// (600px 900px 1200px)
That’s all folks! Thanks again Julien!
elements component (we could have called it thumbs-list or something but that doesn’t matter).
Now, this is the core of a lot of other components. For instance, we have a component featuring top products, with the same list of items but in a nicer wrapper with a heading, a “see more” link, a large left border for some visual impact and so on. It’s just an example, but the elements component is used in at least 3 to 4 other components of our architecture.
Until now, no big deal. Au contraire, it looks pretty nice! DRY code, component-based architecture. Nothing but the best so let’s move on.
We also have a couple of different layouts:
And now, here is the issue: depending on the component and the layout, we want to control the number of items displayed on a single row. For instance, in a one-column layout, we could spread to 6 items per row; 4 or 5 in a two-column layout; 3 in a three-column layout.
And all this has nothing to do with responsive design, yet. So you can imagine what a nightmare it can be when you have to make this component adapt not only to its context but to the screen size, from 300px to 1200px.
Media queries are not a solution. At least not in this case. Media queries are great when we want to adapt the layout to the screen size. This is where they really shine. But that’s not what we want. I mean, first we want our component to work great in all situations at a single screen width; we’ll deal with responsive issues afterwards.
And when switching from 1 to 2 to 3 columns, the viewport’s width has absolutely no impact on anything. It’s always the same. We don’t give a shit about the viewport’s size at the moment; we need to know how much space is available for the component depending on the layout used (and to a lesser extent the meta-component used).
Element queries are not part of any CSS Specification. They basically do not exist as of today. There are a couple of JavaScript-based polyfills involving various syntaxes, but there is still no draft for a native support.
Yet, element queries would be so much better than media queries. The more I think about it, the more I feel like we almost wouldn’t need media queries if we had element queries. Working on a site/application as a collection of components you put together like LEGO not only makes more sense but also lets you handle responsive design at a module level instead of as a macro overview.
That’s why I’ve decided to give element queries a serious go at work. I came across quite a few versions, all of them looking really good:
I decided to settle on the last one, which looks slightly better than the others. Also I like Sam Richards; that’s enough for me. Anyway, all we have to do to make it work — aside from including the script — is add a data-eq-pts attribute to the component, listing breakpoints as a map.
<ul
class="component"
data-eq-pts="small: 300, medium: 500, large: 700, huge: 900"
>
<!-- … -->
</ul>
Then when a min-width is matched, the element can be selected using an attribute selector on data-eq-state matching the mapped keyword — for instance .component[data-eq-state="small"] when the component is between 300 and 499px wide.
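Styling per component width rather than per viewport width could then look something like this sketch; the breakpoint names match the data-eq-pts map above, and the .element child class is hypothetical:

```scss
.component .element {
  // One item per row by default (narrow component)
  width: 100%;
}

.component[data-eq-state='medium'] .element {
  // Two per row once the component itself is 500px wide
  width: 50%;
}

.component[data-eq-state='huge'] .element {
  // Four per row once the component itself is 900px wide
  width: 25%;
}
```

Note that the same selectors apply whatever the viewport size: only the component’s own width matters.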
I have designed a little test case (you might want to test it on CodePen directly and resize the screen):
See the Pen cfdf5410e622f1e5f41035232de4260c by Kitty Giraudel (@KittyGiraudel) on CodePen.
The first collection (top) is the 1-column layout, the second one (middle) is when we have a sidebar, and the last one (bottom) is when we have both the filter bar and the sidebar. As you can see, the number of elements per row adapts to the width available to the component (not the screen size).
I truly believe the future of responsive web design lies somewhere around element queries. They are not just convenient; they are essential to building a DRY and maintainable architecture.
However, they still come with a couple of pitfalls, like infinite loops and nonsense declarations. Imagine you tell a component to have a width of 399px when it is 400+ pixels wide. This is a brainfuck. It is probably because of such things that element queries are still not natively implemented anywhere.
But I hope we might get to it. Some day.
To fulfill Joey Hoer’s request for SassyLists, I have built a little walk function. The idea is the same as the array_walk function from PHP, if you’re familiar with it.
array_walk — Apply a user function to every member of an array
So whenever you have a list of values and want to apply a given function to each of them, you either need to write a loop to do it manually, or you need a walk function. Luckily for you, I’ve written one, and looking back at my code I feel like it’s interesting enough to write about: the call, set-nth and function-exists functions, an argList… nothing but the good stuff.
Pretty much like the array_walk
function actually. Here is the syntax:
walk(list $list, function $function, argList $args...)
The first argument is the list you are walking through. The second argument is the function you want to apply to each item of the list. Any arguments after those two are optional and will be passed as extra arguments to the function call.
This is why we add ... to the $args parameter: it makes it an argList. To put it simply: all arguments passed to the function (as many as you want) starting from the position of $args will be packaged as a list. You can then access them like regular list items, with nth() for instance.
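As a quick illustration of how an argList behaves (count-args is a made-up function, not part of SassyLists):

```scss
@function count-args($args...) {
  // `$args` behaves like a regular Sass list,
  // so list functions such as `length` and `nth` work on it
  @return length($args);
}

// count-args(a, b, c) returns 3
// inside the function, nth($args, 1) would return `a`
```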
For example, let’s say you have a list of colors you want to invert, in order to get complementary colors.
$colors: hotpink deepskyblue firebrick;
$complementary-colors: walk($colors, complementary);
// #69ffb4 #ff4000 #22b2b2
As you can see, this is pretty straightforward. The first argument is the list of colors ($colors) and the second argument is the name of the function you want to apply to each item of the list.
Now let’s move on to something slightly more complicated, with an extra parameter, shall we? Instead of finding the complementary color of each item of the list, let’s lighten all those colors.
$colors: hotpink deepskyblue firebrick;
$complementary-colors: walk($colors, lighten, 20%);
// #ffcfe7 #66d9ff #e05a5a
Not much harder, is it? The second argument is still the function, and we pass a third argument: the percentage for the lighten function. This value will be passed as the second argument to lighten, the first being the color of course.
Okay, let’s move on to the code now. Surprisingly enough, the function core is extremely short and simple. Actually, the call function is doing all the work.
@function walk($list, $function, $args...) {
@for $i from 1 through length($list) {
$list: set-nth($list, $i, call($function, nth($list, $i), $args...));
}
@return $list;
}
Let’s have a little recap of both call and set-nth so you can fully understand what’s going on here. First, set-nth is a function added in Sass 3.3, aiming at updating a specific value in a list. The first argument is the list, the second is the index to be updated and the third is the new value.
I intentionally chose to use set-nth() here rather than building a new list from scratch because I feel like it makes more sense: we are not creating a new list, we are simply updating values. Also I think it’s faster, but I’m not quite sure about that.
Regarding call, I’ve already written about it a couple of times. It does exactly what you expect: it calls the function named after the first argument, passing it all the other arguments in the same order. This is quite cool when you want to dynamically call a function by its name, like we are doing right now.
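A tiny sketch of both functions in isolation (Sass 3.3 syntax; note that since Sass 3.5, call() expects a function reference obtained with get-function() rather than a bare name):

```scss
// `set-nth` returns a copy of the list with one value replaced
$list: set-nth(a b c, 2, x); // → a x c

// `call` invokes a function by its name, forwarding the arguments
$light: call(lighten, hotpink, 20%); // same as lighten(hotpink, 20%)
```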
Back to our function now. Here is what’s going on: we loop through the list and update each value with what is returned by the call function. If we take the last example we worked with, here is what happens step by step:
update $list at index 1 with the result of call(lighten, hotpink, 20%) (== lighten(hotpink, 20%))
update $list at index 2 with the result of call(lighten, deepskyblue, 20%) (== lighten(deepskyblue, 20%))
update $list at index 3 with the result of call(lighten, firebrick, 20%) (== lighten(firebrick, 20%))
return $list
Simple, isn’t it?
The main problem I can see with this function is that you can’t really make sure everything is okay. For instance, there is absolutely no way to know the number of arguments expected by $function. If it’s complementary, it’s 1; if it’s lighten, it needs 2; if it’s rgba, it’s 4, and so on… It really depends on the function name passed.
Also, we can’t make sure the values from $list are valid for $function. What if you try to to-upper-case a list of numbers? It won’t work! Unfortunately, there is no way to make that check either.
In the end, the only thing we can check is whether or not the function exists, thanks to function-exists:
@function walk($list, $function, $args...) {
@if not function-exists($function) {
@warn "There is no `#{$function}` function.";
@return false;
}
/* Function core … */
}
Thanks to the new function-exists function from Sass 3.3, we can test whether a function exists. In our case, we test whether $function refers to an existing function. If it doesn’t, we warn the user and return false.
There is not much we can do aside from that. It’s the responsibility of each function to perform the correct input validation so it doesn’t crash.
With such a simple function we can see how much Sass 3.3 brought to the language. In about 5 lines of SCSS, we have used no less than 3 new functions from Sass 3.3: function-exists, set-nth and call. How cool is that?
Regarding the function in itself now, I think it might be used by some frameworks. I don’t have any use case coming up at the top of my head right now, but being able to walk through an array is actually more useful than we first think.
By the way, you can play with the code on SassMeister:
Play with this gist on SassMeister.
If you think of anything about the code, be sure to have a word my friends. :)
Anyway, at some point someone asked a very interesting question about Sass:
I’m enjoying learning Sass, but one of those things I can’t wrap my head around is use cases for lists. What would you stuff in a Sass list?
I can see why this nice folk came up with such a question. When you’ve been used to vanilla CSS for years, you can hardly see the use case for a Sass list. Let me try to light the path, people!
Let’s start with a quick reminder. First of all, the list
data type isn’t specific to Sass. Actually CSS has been using lists for ages! Doubt it? Consider the following CSS rules:
.element,
.other-element {
font-family: 'Arial', 'Helvetica', sans-serif;
padding: 10px 5px 15px 0;
margin: 1em 0.5em;
background: url('my/awesome/image.png') 0 0 #666;
border: 1px solid silver;
box-shadow: 0 0.5em 0.25em -0.5em rgba(0, 0, 0, 0.1);
}
All these properties (and many more) use lists as values. To my knowledge, only font-family uses a comma-separated list, though. Anyway, most of them are shorthands for multiple properties, so that’s not surprising, but still. Even the selector itself is a (comma-separated) list!
Lists have been around for a long time now; we just didn’t call them “lists” because we didn’t have to. Sass officially uses the word “list” as a data type, but that doesn’t mean Sass introduced lists to the CSS landscape.
Note: by the way, if you haven’t read my article about Sass lists, I suggest you do.
I believe what we’ve just seen in the first section is a valid answer to the question. Since CSS supports lists for some values, why wouldn’t Sass? But I suppose you might want a deeper answer. Actually, a Sass list doesn’t have much point by itself; it gets pretty cool when you can iterate over it with a loop. Thankfully Sass provides three types of loops: @for, @each and @while.
Let me try with a practical example: at work, we display a background image directly related to the post-code the user is geolocated in. For instance, I live in Grenoble, France, of which the post-code is 38000, shortened to 38. So I get served a background image called background-38.jpg. To avoid doing this manually for all post-codes, we use a list.
$zips: 07, 26, 38, 69, 'unknown';
// 1. `.zipcode-*` class on body
// 2. Header only
// 3. Home page
@each $zip in $zips {
.zipcode-#{$zip} {
// 1
.header {
// 2
background-image: url('../bundles/images/backgrounds/#{$zip}-small.jpg');
}
&.home {
// 3
background-image: url('../bundles/images/backgrounds/#{$zip}-large.jpg');
}
}
}
Thanks to the $zips list and the @each loop, the whole process of assigning a specific background image depending on a class becomes very simple. It also gets damn simple to add/remove a zip-code: all we have to do is update the list.
Okay. I believe this is a decent use case for a list. Now what about list functions like append or length? Finding a good example is getting tricky, but I suppose we could take the one I recently talked about in this article about a star rating widget in Sass, where I build a selector out of a Sass list.
@for $i from 1 to 5 {
$selector: ();
@for $j from 1 through $i {
$selector: append(
$selector,
unquote("[data-rating^='#{$i}'] .star-#{$j}"),
comma
);
}
#{$selector} {
// CSS rules
}
}
The code might be complex to understand so I suggest you read the related article. For instance, when $i is 4, the generated $selector would be:
[data-rating^='4'] .star-1, [data-rating^='4'] .star-2, [data-rating^='4'] .star-3, [data-rating^='4'] .star-4 { … }
Anyway, this is a valid use case for append, even if you could have worked around the problem using @extend.
Another use case would be building a CSS gradient from a Sass list of colors. I have an article ready about this; SitePoint will release it in the next few weeks. By the way, I provide another example of lists in my article about making a Sass component in 10 minutes at SitePoint, where I use one to store various message types (alert, danger, info…) as well as base colors (orange, red, blue…). Probably one of my best write-ups so far, so be sure to have a look.
In most projects, Sass lists are not a game changer. They can be useful if properly used, but you can always do without. Now if you ask me, they are one of the most interesting features of the whole language. Lists are arrays, and arrays are part of the core of any language. Once you get arrays and loops, you can do absolutely tons of stuff. However, most of it won’t be needed in the average CSS project.
Long story short: lists are awesome, folks.
@extend doesn’t work whenever you’re in a @media block.
The trick was to wrap the placeholder extension in a mixin. This mixin accepts a single boolean defining whether it should extend the placeholder or include the mixin’s content as a regular mixin would. Here is a short example:
@mixin clearfix($extend: true) {
@if $extend {
@extend %clearfix;
} @else {
overflow: hidden;
}
}
%clearfix {
@include clearfix($extend: false);
}
For more information about this technique and to understand this post, I suggest you read that article. Don’t worry, I’ll be here. I’ll wait, go ahead.
All good? Fine. This morning, Matt Stow suggested a new version where we wouldn’t have to create a mixin for every placeholder we want. Instead, we would have a single mixin — let’s call it extend() — asking for a placeholder’s name, and extending it or including the content as we did yesterday.
You can find Matt’s demo on SassMeister. It looks something like this:
@mixin extend($placeholder, $extend: true) {
@if $extend {
@extend %#{$placeholder};
} @else {
@if $placeholder == clearfix {
overflow: hidden;
} @else if $placeholder == hide-text {
overflow: hidden;
text-indent: 100%;
white-space: nowrap;
}
/* … any other placeholders you want … */
@else {
@warn "`#{$placeholder}` doesn’t exist.";
}
}
}
%clearfix {
@include extend(clearfix, $extend: false);
}
%hide-text {
@include extend(hide-text, $extend: false);
}
This technique is great if you want to reduce the number of mixins. Indeed, you have only one extend() mixin, plus all the placeholders you want. When you create a placeholder, all you have to do is add its core content to the mixin with an @else if ($placeholder == my-placeholder) clause.
However it can quickly become very messy when you have a lot of placeholders to deal with. I can see the extend() mixin’s core being dozens of lines long, which is probably not a good idea. Also, I don’t like having a lot of conditional statements, especially since Sass doesn’t and won’t ever provide a @switch directive.
That being said, I liked Matt’s idea so I tried to push things even further! To prevent from having a succession of conditional directives, we need a loop. And to use a loop, we need either a list or a map.
What’s cool with CSS declarations is they look like keys/values from a map. I think you can see where this is going.
My idea was to move all the mixin’s core to a configuration map so the mixin only deals with logic. Let me explain with an example: what if we had a map like this:
$placeholders-map: (
clearfix: (
overflow: hidden,
),
hide-text: (
overflow: hidden,
text-indent: 100%,
white-space: nowrap,
),
);
We have a top-level map called $placeholders-map. Each key of the map is the name of a placeholder (e.g. clearfix). The value bound to a key is a map as well; those inner maps are basically CSS declarations. There can be as many as we want.
Now that we have a map to loop through, we can slightly rethink Matt’s work:
@mixin extend($placeholder, $extend: true) {
$content: map-get($placeholders-map, $placeholder);
// If the key doesn’t exist in map,
// Do nothing and warn the user
@if $content == null {
@warn "`#{$placeholder}` doesn’t exist in `$placeholders-map`.";
}
// If $extend is set to true (most cases)
// Extend the placeholder
@else if $extend == true {
@extend %#{$placeholder};
}
// If $extend is set to false
// Include placeholder’s content directly
@else {
@each $property, $value in $content {
#{$property}: $value;
}
}
}
First, we retrieve the placeholder’s content from $placeholders-map with map-get($placeholders-map, $placeholder). If the name doesn’t exist as a key in the map (null), we do nothing but warn the developer.
If the placeholder’s name has been found and $extend is set to true, we extend the actual Sass placeholder. Else, if $extend is false, we dump the placeholder’s content from within the mixin. To do so, we loop through the inner map of declarations. Simple and comfy.
Last but not least, let’s not forget to create our Sass placeholders! And this is where there is a huge improvement compared to Matt’s version: since we have a map, we can loop through it to generate the placeholders. We don’t have to do it by hand!
// Looping through `$placeholders-map`
// Instantiating a placeholder every time
// With $extend set to false so it dumps
// mixin’s core in the placeholder’s content
@each $placeholder, $content in $placeholders-map {
%#{$placeholder} {
@include extend($placeholder, $extend: false);
}
}
Done.
You can have a look at the fully commented code here on SassMeister:
Play with this gist on SassMeister.
While the code does the job well, I am not sure how I feel about this. To be perfectly honest with you people, I think I’d rather use the version from yesterday’s article (which I already do at work) and this for two reasons.
First, there is a big problem with this version: since we rely on the fact that CSS declarations can be stored as keys/values in a Sass map, it is impossible to use nesting (including &), inner mixins, or @extend in the mixin core. Thus, it does the job for simple placeholders as we’ve seen in our demo, but wouldn’t work for more complex pieces of code.
Secondly, I don’t like storing CSS declarations in a map, no matter how clever it is. In the end, I feel like it adds too much code complexity. Someone once told me it’s like a preprocessor in a preprocessor. I don’t think it’s worth the pain.
That being said, it’s pretty cool as an experiment. Playing around with Sass’ syntax has always been one of the things I love most about this preprocessor. Hence this blog post, and the pretty crazy demo. Anyway, I hope you liked it, and thanks Matt!
Anyway, I was looking at the code and to my surprise, Ken was mostly using mixins for common patterns, even when there was no variable involved whatsoever. You probably know it’s considered bad practice to use a mixin when you don’t need your styles to vary according to passed arguments; placeholders are best suited for such a thing. More information on the topic in this article at SitePoint.
So I opened an issue to prompt Ken to move away from mixins when there is no need for them, in favor of placeholders, and while he was completely willing to do so, he was worried about usage in media queries. Let’s pause here for some explanations.
This is something I covered before in this article about @extend
at SitePoint but I’ll sum up here so you can follow along if you’re not very comfortable with Sass yet.
When extending a selector, Sass doesn’t take the CSS content from the extended selector and put it in the extending one. It works the other way around: it takes the extending selector and appends it to the extended one. This is the reason why extending placeholders is better for the final output than including mixins.
Because extending takes the current selector and moves it to the extended selector, it is impossible to use it across different scopes. For instance, you can’t extend a placeholder that has been declared in a @media block, nor can you extend a placeholder from root if you’re within a @media directive.
And this is a huge issue. Fortunately, this has to be the most expected feature request from Sass (according to the outrageous number of issues mentioning this on their repo: #501, #640, #915, #1050, #1083). At this point, we believe Sass maintainers will find a way to allow cross-scope extending.
Meanwhile, this is why Ken didn’t use placeholders and stuck to mixins. However, from my experience it’s not very common to have to include a mixin/extend a placeholder at one very specific breakpoint and not the others. Usually, rules scoped into mixins/placeholders are the core of the element they are applied to, meaning they should be there in all circumstances. So I decided to find a solution.
See what I did? With the title? “Mixin”… Because it’s like… Nevermind. I opened a SassMeister gist and started playing around to see if I could come up with a solution. First of all, what I ended up with is not unique. People have done it before me; and I remember seeing frameworks using it already.
My idea was the following: extend the placeholder when possible, else include the mixin. Also, I didn’t want to have code duplicates. Whenever I need to make a change in the code, I don’t want to edit both the placeholder and the mixin. There should be only a single place where the code lies.
For our example, let’s consider a basic need: a micro-clearfix hack mixin. Here is how I decided to tackle things:
@mixin clear($extend: true) {
@if $extend {
@extend %clear;
} @else {
&:after {
content: '';
display: table;
clear: both;
}
}
}
%clear {
@include clear($extend: false);
}
Okay, that looks nasty. Here is what we do: first we define the clear mixin. The only parameter in the signature is $extend, a boolean set to true by default.
Then in the mixin core, we check whether $extend is set to true. If it is, we extend the placeholder. If it is not, we dump the CSS code as a regular mixin would.
Outside the mixin, we define the placeholder %clear. To avoid repeating the CSS code in the placeholder, we only have to include the mixin with $extend set to false. This dumps the CSS code in the placeholder’s core.
Here is a boilerplate to code your own:
@mixin myMixin($extend: true) {
@if $extend {
@extend %myMixin;
} @else {
// Mixin core
}
}
%myMixin {
@include myMixin($extend: false);
}
There it is. Now let’s try it:
.a {
@include clear;
}
.b {
@include clear;
}
This will result in the following CSS output:
.a:after,
.b:after {
content: '';
display: table;
clear: both;
}
So far, quite nice, isn’t it? Even though we are using a mixin, we get the behaviour of a placeholder, since both selectors are merged into a single one, exactly as extending a placeholder would do.
Now let’s imagine we need to have a clear fix at a certain breakpoint:
@media (min-width: 48em) {
.c {
@include clear;
}
}
This will throw an error:
You may not @extend an outer selector from within @media.
You may only @extend selectors within the same directive.
From "@extend %clear" on line 3.
This is exactly the issue we are trying to work around. Now, thanks to the way we wrote our mixin, we only have to set $extend to false to make it work:
@media (min-width: 48em) {
.c {
@include clear(false);
}
}
No more error! The code is output as usual because in this case we are no longer extending a placeholder (which would produce an error) but dumping CSS rules like a regular mixin.
It’s a shame we have to hack around the syntax in order to get the best from Sass placeholders. Hopefully cross-scope extending will save us from doing such nasty things whenever it comes live.
In any case, this looks like a robust way to get the most from both mixins and placeholders. Hope you like it people!
A while back, I wanted to create a function to calculate the Levenshtein distance between two strings: the number of manipulations you need to apply to string A in order to get string B. If you want Wikipedia’s definition, here it is:
In information theory and computer science, the Levenshtein distance is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (insertion, deletion, substitution) required to change one word into the other.
If you wonder whether I succeeded or failed, I succeeded. You can play with the code directly on SassMeister. So if you ever wanted to calculate the Levenshtein distance between two strings in Sass, now you can. Useless thus essential.
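As a point of comparison, the same algorithm fits in a few lines of a general-purpose language. Here is a minimal Python sketch of the classic dynamic-programming approach (an illustration of the algorithm, not the Sass implementation):

```python
def levenshtein(a, b):
    # dp[i][j] = number of edits to turn a[:i] into b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(b) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1]

print(levenshtein("kitten", "sitting"))  # → 3
```

Note that the `dp` table is precisely the kind of two-dimensional grid the rest of this post is about.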
Now back to our main topic: I needed matrices. A matrix is basically a two-dimensional array (or list). For example this is a Sass matrix:
$matrix: (
  (0 1 2 3)
  (1 0 0 0)
  (2 0 0 0)
  (3 0 0 0)
);
Well, this was pretty easy. Now what if we want to dynamically create a matrix? Update values? Retrieve values? And more? This is getting harder, so I created a couple of functions to ease the pain.
JavaScript allows you to instantiate a new array of n cells. This makes creating empty matrices quite easy, you only need a single for-loop like this:
var matrix = new Array(9)
for (var i = 0; i < matrix.length; i++) {
matrix[i] = new Array(9)
}
This would be enough to create an empty 9x9 matrix with all cells filled with undefined. In Sass, you cannot create a new list of n cells. If you do $list: (9), you are basically assigning the number 9 to the $list variable, which is not what you want.
Thus I found out it is much easier to instantiate a new list with dummy values to be updated later than to create a matrix with definitive values right away. Let’s do that, shall we?
@function matrix($x, $y: $x) {
$matrix: ();
@for $i from 1 through $x {
$tmp: ();
@for $j from 1 through $y {
$tmp: append($tmp, 0); // 0 is the filler value
}
$matrix: append($matrix, $tmp);
}
@return $matrix;
}
See how we make the $y parameter optional by defaulting it to $x? It makes instantiating square matrices easier: matrix(5). Little things matter. ;)
Being able to instantiate an empty matrix is cool but being able to fill it with real values is even better! What if we had a set-entry
function setting a given value at a given position in a given matrix?
@function set-entry($matrix, $coords, $value) {
$x: nth($coords, 1);
$y: nth($coords, 2);
$matrix: set-nth($matrix, $x, set-nth(nth($matrix, $x), $y, $value));
@return $matrix;
}
We could have requested two distinct parameters for $x and $y but I feel like it is better to ask for a 2-item long list ($x $y). It keeps the signature cleaner and makes more sense to me. However, we need to make sure $coords is actually a 2-item long list of coordinates, so why don’t we make a little helper for this?
@function _valid-coords($coords) {
@if length($coords) != 2 or type-of(nth($coords, 1)) != number or type-of(nth($coords, 2)) != number {
@return false;
}
@return true;
}
Note: I like to prefix private functions with an underscore. By “private” I mean functions that are not supposed to be called from the outside. Unfortunately, Sass doesn’t provide any way to actually privatize stuff.
All we did was check the length and the types. This doesn’t deal with out-of-bounds coordinates but that’s more than enough for now. Anyway, setting a value in the grid is as easy as:
$matrix: set-entry($matrix, (1 1), 42);
What is also pretty cool is you can use negative indexes to start from the end of columns/rows. So to fill the last entry from the last row of the grid, you’d do something like set-entry($matrix, (-1 -1), 42)
.
Now that we are able to easily set values in the grid, we need a way to retrieve those values! Let’s build a get-entry
function working exactly like the one we just did.
@function get-entry($matrix, $coords) {
@if not _valid-coords($coords) {
@warn "Invalid coords `#{$coords}` for `get-entry`.";
@return false;
}
@return nth(nth($matrix, nth($coords, 1)), nth($coords, 2));
}
See how we check for coordinates validity with our brand new helper? I don’t know for you, but I think it looks pretty neat! Anyway, to retrieve a value at position (x y), all we have to do is:
$value: get-entry($matrix, (1 1)); // 42
What I always found difficult when working with matrices (no matter the language) is actually seeing what’s going on. I need a visual representation of the grid to understand what I am doing and whether I’m doing it properly. Unfortunately my debug function from SassyLists isn’t quite suited for such a case but the main idea is the same. I just had to revamp it a little bit.
@function display($matrix) {
$str: '';
@each $line in $matrix {
$tmp: '';
@each $item in $line {
$tmp: $tmp + ' ' + $item;
}
$str: $str + $tmp + '\A ';
}
@return $str;
}
This function returns a string like this: " 0 0 0\A 0 0 0\A 0 0 0\A "
. As is, it is not very useful but when you couple it with generated content and white-space wrapping, you got something like this:
0 0 0
0 0 0
0 0 0
… which is pretty nice. Basically I used the mixin from SassyLists which takes a string and displays it in the body pseudo-element with white-space: pre-wrap
, allowing for line breaks.
@mixin display($matrix, $pseudo: before) {
body:#{$pseudo} {
content: display($matrix) !important;
display: block !important;
margin: 1em !important;
padding: 0.5em !important;
background: #efefef !important;
border: 1px solid #ddd !important;
border-radius: 0.2em !important;
color: #333 !important;
font: 1.5em/1.5 'Courier New', monospace !important;
text-shadow: 0 1px white !important;
white-space: pre-wrap !important;
}
}
Since there are two pseudo-elements (::after
and ::before
), you can watch for 2 matrices at the same time. Pretty convenient when working on complicated stuff or debugging a matrix.
So far we managed to initialize a matrix, set values in it, retrieve those values and display the whole thing as a two-dimensional grid directly from CSS. This is quite a lot for a first roll with matrices, don’t you think?
But what if we want to push things further? While I am not ace with matrices (I never really did extremely well in math), I know someone who is: Ana Tudor. You may be familiar with some of her crazy experiments from CodePen. Anyway, Ana is most certainly a brainiac so she gave me plenty of ideas of functions to ease the pain of having to deal with matrices!
Among other things, there are a couple of functions to swap values, rows and columns by position:
swap-entries($matrix, $e1, $e2)
: swaps values $e1
and $e2
from $matrix
swap-rows($matrix, $r1, $r2)
: swaps rows $r1
and $r2
from $matrix
swap-columns($matrix, $c1, $c2)
: swaps columns $c1
and $c2
from $matrix
Some functions to retrieve additional information about the current matrix:
columns($matrix)
: returns the number of columns in $matrix
rows($matrix)
: returns the number of rows in $matrix
is-square($matrix)
: checks whether $matrix has as many rows as columns
is-diagonal($matrix)
: checks whether all values on the main diagonal of $matrix are set while all other values are equal to 0
is-upper-triangular($matrix, $flag: null)
: checks whether all values below the $matrix diagonal are equal to 0
is-lower-triangular($matrix, $flag: null)
: checks whether all values above the $matrix diagonal are equal to 0
… and much more. And because I needed a place to store all those functions, I made a GitHub repository, so if you feel like contributing, be sure to have a glance!
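To make those helpers concrete, here is a rough Python equivalent operating on nested lists. This is only an illustration of the same logic, with hypothetical names mirroring the Sass API, not the actual SassyMatrix code:

```python
def swap_rows(matrix, r1, r2):
    # Return a copy of `matrix` with rows r1 and r2 (1-indexed) exchanged
    m = [row[:] for row in matrix]
    m[r1 - 1], m[r2 - 1] = m[r2 - 1], m[r1 - 1]
    return m

def swap_columns(matrix, c1, c2):
    # Same idea, but exchanging two columns in every row
    m = [row[:] for row in matrix]
    for row in m:
        row[c1 - 1], row[c2 - 1] = row[c2 - 1], row[c1 - 1]
    return m

def is_square(matrix):
    # A matrix is square when every row is as long as the matrix is tall
    return all(len(row) == len(matrix) for row in matrix)

grid = [[1, 2], [3, 4]]
print(swap_rows(grid, 1, 2))  # → [[3, 4], [1, 2]]
print(is_square(grid))        # → True
```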
Also, there is a Compass extension for SassyMatrix now:
gem install SassyMatrix
require 'SassyMatrix' in config.rb
@import "SassyMatrix" in your stylesheet
Also, you can play with SassyMatrix directly at SassMeister, so be sure to give it a try. Plus, I’d love to have some feedback!
This is the 3rd part of the Git Tips & Tricks series from Loïc Giraudel. If you missed the first post and the second one, be sure to give them a read! And now roll up your sleeves, because this is getting wicked!
Hi people! Welcome to the third part of this Git Tips & Tricks series! This week I’m going to start with 2 useful tricks to fix conflicts or see diffs in a graphical tool instead of the command line. Then we’ll explore the magic of the git bisect command. Finally, I will show how to merge several commits into a single one before pushing. What do you think? Let’s go!
Whenever you face a merge conflict, you can use a merge tool to resolve it without too much headache. Just use the git mergetool
command, it will ask you which tool to use.
Like git mergetool
to resolve merge conflicts, there is a git difftool
to see diff results in a graphical tool. Unfortunately, git difftool
opens files sequentially: after checking a file, you have to close the diff tool so Git can reopen it with the next file.
Fortunately, since version 1.7.11, Git can diff a whole directory at once thanks to the --dir-diff parameter. If you are using an older version, worry not! It’s possible to install a small script doing the same thing:
/home/workspace $ git clone git@github.com:wmanley/git-meld.git
Cloning into git-meld...
remote: Counting objects: 64, done.
remote: Compressing objects: 100% (34/34), done.
remote: Total 64 (delta 31), reused 57 (delta 25)
Receiving objects: 100% (64/64), 17.83 KiB, done.
Resolving deltas: 100% (31/31), done.
Then, create a new meld alias in Git, for example by adding the following line to the [alias] section of your .git/config file:
meld = !/home/workspace/git-meld/git-meld.pl
Now, you just have to use git meld
command for your diff:
$ git meld HEAD HEAD~4
$ git meld myBranch myOtherBranch
This command will ask you which diff tool to use, then open the whole directory in the tool instead of each file sequentially.
When a new bug appears in your application, the best way to fix the bug is to find which commit introduced it. Git has an awesome method to find a specific commit with a dichotomic search solution.
In computer science, a dichotomic search is a search algorithm that operates by selecting between two distinct alternatives (dichotomies) at each step. It is a specific type of divide and conquer algorithm. A well-known example is binary search.
— Wikipedia - Dichotomic Search
The magic Git command is git bisect
. This command requires 2 commits SHA1 (or references) to work: an old commit where the bug is not there and a recent commit where the bug is there. The command will checkout the commit in the middle of the interval of the two commits.
Once checkout of the middle commit has been done, user has to test if the bug is still there or not and inform git bisect
command. According to user answer, git bisect
will checkout a commit in the middle of the first or the second half of the initial interval.
Then the user has to check the bug again and inform git bisect
. At each step of the process, git bisect
reduce the interval and finally returns the SHA1 of the commit which has introduced the bug.
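The interval-narrowing logic behind git bisect is plain binary search. To illustrate it outside of Git, here is a hedged Python sketch where the history is modeled as a list of booleans (False for “bug absent”, True for “bug present from the first bad commit onwards”):

```python
def first_bad(commits):
    # commits: chronological list of flags; False = good, True = bad.
    # Invariant: the first bad commit always lies in [lo, hi].
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if commits[mid]:
            hi = mid        # bug present: first bad commit is mid or earlier
        else:
            lo = mid + 1    # bug absent: first bad commit is after mid
    return lo

history = [False, False, False, True, True, True]
print(first_bad(history))  # → 3, found in O(log n) checks
```

This is why bisecting 20 commits only takes "roughly 3 steps" in the session below: each check halves the remaining interval.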
Let’s take an example. I’m going to create 20 commits, each commit adding a new line “line number #” to file.txt. One of the insertions will contain a typo: “numer” instead of “number”. We are going to find the commit which introduced the typo with git bisect.
$ # I create 20 commits here
$ cat file.txt | grep number | wc -l
19
$ cat file.txt | grep numer | wc -l
1
Ok, I have 19 occurrences of “number” and 1 occurrence of “numer”; let’s find which commit inserted the typo. To do so, I run git bisect with two commit references. I know that the bug was not there 20 commits ago and is present now, so I can pass HEAD and HEAD~20 as my two references.
$ git bisect start HEAD HEAD~20
Bisecting: 9 revisions left to test after this (roughly 3 steps)
[2128ffe8f612d40bc15b617600b6de5f5231d58e] Commit 10
Git checks my interval and calculates that I will need roughly 3 more steps to find the faulty commit. The commit in the middle of my interval (“Commit 10”) has been checked out. If I look at my master branch in Gitg (or Gitk, GitX or any Git graphical tool…), I can see that Git has created two references, refs/bisect/start and refs/bisect/good-[…], next to my HEAD and HEAD~20 commits.
Note: it’s possible to use git bisect visualize or git bisect view to see the remaining interval in a graphical tool. For a console view, you can use git bisect view --stat.
Now I have to check if the bug is still there or not and inform Git according to my check.
$ cat file.txt | grep numer | wc -l
1
The bug is still there, so I use git bisect bad
to tell Git bisect that the current state is still broken.
$ git bisect bad
Bisecting: 4 revisions left to test after this (roughly 2 steps)
[2c935028965bd60a8fe15d428feb1f3972245e75] Commit 5
Git bisect has narrowed the commit interval and checked out “Commit 5”. I will find the typo in 2 more steps. In Gitg, my master branch looks like this:
The refs/bisect/bad reference has been moved to “Commit 10”. I check whether the bug is still there.
$ cat file.txt | grep numer | wc -l
1
$ git bisect bad
Bisecting: 2 revisions left to test after this (roughly 1 step)
[7ab0afc851dc3cdd1bee795b6bc0656d57497ca5] Commit 2
Now Gitg shows this:
$ cat file.txt | grep numer | wc -l
0
$ git bisect good
Bisecting: 0 revisions left to test after this (roughly 1 step)
[a21e6e97e003b614793cffccbdc1a53985fc11d4] Commit 4
The bug wasn’t there in this step, so I use git bisect good
instead of git bisect bad
. Gitg has created a new refs/bisect/good-[…] reference.
$ cat file.txt | grep numer | wc -l
1
$ git bisect bad
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[7ae5192025b3a96520ee4897bd411ee7c9d0828f] Commit 3
$ cat file.txt | grep numer | wc -l
1
$ git bisect bad
7ae5192025b3a96520ee4897bd411ee7c9d0828f is the first bad commit
commit 7ae5192025b3a96520ee4897bd411ee7c9d0828f
Author: lgiraudel <lgiraudel@mydomain.com>
Commit 3
:100644 100644 d133004b66122208e5a1841e01b77db5862548c0 cd8061d8bb277cb08d8965487ff263181a82e2e4 M file.txt
Finally, Git bisect gives me the guilty commit. Let’s check its content:
$ git log -1 -p
commit 7ae5192025b3a96520ee4897bd411ee7c9d0828f
Author: lgiraudel <lgiraudel@mydomain.com>
Commit 3
diff --git file.txt file.txt
index d133004..cd8061d 100644
--- file.txt
+++ file.txt
@@ -1,2 +1,3 @@
line number 1
line number 2
+line numer 3
Now that I have found the commit which has introduced the typo, I can read its content to find how to fix my bug. Once the bisect is finished, I can use git bisect reset
to go back to the HEAD and clean references in my branch. This command can be used in the middle of a bisect process to stop it.
Sometimes it’s not possible to check whether a bug is present on a specific commit. In this case, instead of using the git bisect good or git bisect bad commands, you can use git bisect skip to ask Git to test a commit near the current one instead.
$ git bisect start HEAD HEAD~20
Bisecting: 9 revisions left to test after this (roughly 3 steps)
[2128ffe8f612d40bc15b617600b6de5f5231d58e] Commit 10
$ cat file.txt | grep numer | wc -l
1
$ git bisect bad
Bisecting: 4 revisions left to test after this (roughly 2 steps)
[2c935028965bd60a8fe15d428feb1f3972245e75] Commit 5
$ git bisect skip
Bisecting: 4 revisions left to test after this (roughly 2 steps)
[7ae5192025b3a96520ee4897bd411ee7c9d0828f] Commit 3
$ cat file.txt | grep numer | wc -l
1
$ git bisect bad
Bisecting: 0 revisions left to test after this (roughly 1 step)
[7ab0afc851dc3cdd1bee795b6bc0656d57497ca5] Commit 2
$ cat file.txt | grep numer | wc -l
0
$ git bisect good
7ae5192025b3a96520ee4897bd411ee7c9d0828f is the first bad commit
commit 7ae5192025b3a96520ee4897bd411ee7c9d0828f
Author: lgiraudel <lgiraudel@mydomain.com>
Commit 3
:100644 100644 d133004b66122208e5a1841e01b77db5862548c0 cd8061d8bb277cb08d8965487ff263181a82e2e4 M file.txt
Of course, if you skip the last steps of the bisect process, Git won’t be able to tell you which commit has introduced the bug and will return a commit range instead of a commit.
If you want to avoid testing each step of the bisect process manually, you can use a test script to do it for you. Of course, it’s not always possible and sometimes you’ll spend more time writing the test than running the bisect manually. The script must exit with 0 if the code is good or with 1 if the code is bad.
The test script is really easy to write for our use case. For real use cases, it usually requires a testing technology such as a unit test, BDD or sanity framework.
#!/bin/sh
exit `cat file.txt | grep numer | wc -l`
Now, let’s just launch git bisect
with the script:
$ git bisect start HEAD HEAD~20
Bisecting: 9 revisions left to test after this (roughly 3 steps)
$ git bisect run ./bisect_auto.sh
running ./bisect_auto.sh
Bisecting: 4 revisions left to test after this (roughly 2 steps)
[2c935028965bd60a8fe15d428feb1f3972245e75] Commit 5
running ./bisect_auto.sh
Bisecting: 2 revisions left to test after this (roughly 1 step)
[7ab0afc851dc3cdd1bee795b6bc0656d57497ca5] Commit 2
running ./bisect_auto.sh
Bisecting: 0 revisions left to test after this (roughly 1 step)
[a21e6e97e003b614793cffccbdc1a53985fc11d4] Commit 4
running ./bisect_auto.sh
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[7ae5192025b3a96520ee4897bd411ee7c9d0828f] Commit 3
running ./bisect_auto.sh
7ae5192025b3a96520ee4897bd411ee7c9d0828f is the first bad commit
commit 7ae5192025b3a96520ee4897bd411ee7c9d0828f
Author: lgiraudel <lgiraudel@mydomain.com>
Commit 3
:100644 100644 d133004b66122208e5a1841e01b77db5862548c0 cd8061d8bb277cb08d8965487ff263181a82e2e4 M file.txt
bisect run success
If you are working on a big task, it is a good idea to commit regularly, especially if you have to switch to other branches and don’t want to stash all your work. But you should remember that each commit must leave the branch in a stable state: it will be easier to cherry-pick a specific commit onto another branch, revert a specific commit that doesn’t work as expected or just run a git bisect without skipping commits.
You can add new files to the last commit with the git commit --amend command instead of creating a new commit, but there is also a way to merge several commits easily: the interactive git rebase command.
Let’s take our 20 commits adding a new line to a text file:
If my 20 commits haven’t been pushed to the remote repository yet, I can consider merging them into a single commit.
The command to do this:
$ git rebase -i HEAD~20
Git will open editor with one line per commit:
pick b2be46f Commit 1
pick 7d028f1 Commit 2
pick 90b2d43 Commit 3
pick b08b7ae Commit 4
pick 95d6490 Commit 5
pick 3ed326e Commit 6
pick 0472b8e Commit 7
pick 87ec4b6 Commit 8
pick 4aa29a1 Commit 9
pick b83b606 Commit 10
pick d5bcde4 Commit 11
pick b8bda01 Commit 12
pick b84c747 Commit 13
pick 880e179 Commit 14
pick b4b2c0c Commit 15
pick c2bfa94 Commit 16
pick dc4579d Commit 17
pick 8082b63 Commit 18
pick f40292b Commit 19
pick bb09305 Commit 20
# Rebase 36b95b2..bb09305 onto 36b95b2
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit’s log message
# x, exec = run command (the rest of the line) using shell
#
# If you remove a line here THAT COMMIT WILL BE LOST.
# However, if you remove everything, the rebase will be aborted.
#
If I want to merge my 20 commits, I can replace pick with squash (or simply s) for each commit except the first one.
pick b2be46f Commit 1
s 7d028f1 Commit 2
s 90b2d43 Commit 3
s b08b7ae Commit 4
s 95d6490 Commit 5
s 3ed326e Commit 6
s 0472b8e Commit 7
s 87ec4b6 Commit 8
s 4aa29a1 Commit 9
s b83b606 Commit 10
s d5bcde4 Commit 11
s b8bda01 Commit 12
s b84c747 Commit 13
s 880e179 Commit 14
s b4b2c0c Commit 15
s c2bfa94 Commit 16
s dc4579d Commit 17
s 8082b63 Commit 18
s f40292b Commit 19
s bb09305 Commit 20
# Rebase 36b95b2..bb09305 onto 36b95b2
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit’s log message
# x, exec = run command (the rest of the line) using shell
#
# If you remove a line here THAT COMMIT WILL BE LOST.
# However, if you remove everything, the rebase will be aborted.
#
If I save the content and close the editor, Git will merge the 20 commits into a single one, then open the editor again to display the 20 commit messages. I can keep or edit the resulting commit message, then save and close the editor to finish the squashing process.
Now I have a single commit which adds 20 lines in the text file, instead of having 20 commits, each one adding only one line:
$ git log -1 -p
commit f523330f8db0030eadc41836b54713aac2baf18b
Author: lgiraudel <lgiraudel@mydomain.com>
Many commits instead of 20
diff --git file.txt file.txt
new file mode 100644
index 0000000..b636d88
--- /dev/null
+++ file.txt
@@ -0,0 +1,20 @@
+line number 1
+line number 2
+line numer 3
+line number 4
+line number 5
+line number 6
+line number 7
+line number 8
+line number 9
+line number 10
+line number 11
+line number 12
+line number 13
+line number 14
+line number 15
+line number 16
+line number 17
+line number 18
+line number 19
+line number 20
That’s all folks! I hope those tricks will help you in your daily work. Git bisect has deeply changed the way I hunt for weird bugs: finding the guilty commit is often easier than digging into the code. And merging commits before pushing helps keep a clean commit log.
Although at this point, sorting numbers was not enough for me. I wanted to sort more. I wanted to sort everything! Thankfully, Sass 3.3 provided exactly what I needed: string manipulation functions. So I started hacking around to make a sorting function. It took me two days but eventually I did it.
That could have been the end of that if Sam Richards (a.k.a Snugug) had not put his Quick Sort implementation on my way. God, it was both fast and beautiful but… it was for numeric values only. Challenge accepted!
It didn’t take me long to update his function in order to sort anything, very quickly (actually as quickly as Ruby can get, which means, not much…). And I really enjoyed working on this, so I started implementing other famous algorithms in Sass, resulting in SassySort.
Note: I recently wrote an article about how to implement the Bubble Sort algorithm in Sass. If you haven’t read it, you should!
SassySort is now a Compass extension, which means you can easily include it in any of your projects:
gem install SassySort in your terminal
add require 'SassySort' to your config.rb
add @import 'SassySort' to your stylesheet
If you simply want to add a single file to your project, you can get the dist file from the repository, copy and paste its content into your project and voila.
Then you’ve access to a neat little API:
$list: oranges pears apples strawberries bananas;
$sort: sort($list);
// => apples bananas oranges pears strawberries
That’s pretty much the end of it.
Note: I’ve also asked SassMeister to include it, so you might be able to use it directly in SassMeister in the not-so-far future.
Looking back at my code, I think it’s pretty cool how I handled the whole thing. There are a couple of algorithms available but I wanted to keep the function name simple: sort() and not bubble-sort() or insertion-sort(). So you can pass the algorithm name as an argument.
$sort: sort($list, $algorithm: 'bubble');
This will use the Bubble Sort implementation, because of the way the sort()
function works:
@function sort($list, $order: $default-order, $algorithm: 'quick') {
@return call('#{$algorithm}-sort', $list, $order);
}
As you can see, the sort() function does nothing more than defer the return to a sub-function named after the algorithm you ask for (e.g. %algorithm%-sort). The default algorithm is quick, as specified in the function signature, but you can use bubble, insertion, shell, comb and selection as well. However quick is simply… quicker.
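This defer-to-a-named-function pattern translates to most languages. For illustration, here is a speculative Python sketch of the same dispatch idea, with toy sorting implementations standing in for the real algorithms (this is not the SassySort source):

```python
def quick_sort(items):
    # Minimal quicksort, kept short for the sake of the example
    if len(items) <= 1:
        return list(items)
    pivot, *rest = items
    return (quick_sort([x for x in rest if x < pivot])
            + [pivot]
            + quick_sort([x for x in rest if x >= pivot]))

def bubble_sort(items):
    # Classic bubble sort on a copy of the input
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

ALGORITHMS = {'quick': quick_sort, 'bubble': bubble_sort}

def sort(items, algorithm='quick'):
    # Defer to the implementation named by `algorithm`,
    # like Sass' call('#{$algorithm}-sort', $list, $order)
    return ALGORITHMS[algorithm](items)

print(sort(['pears', 'apples', 'bananas']))  # → ['apples', 'bananas', 'pears']
```

The design choice is the same in both languages: one public entry point, with the algorithm selection reduced to a name lookup.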
Depending on what you aim to do with this sorting function, you may encounter issues when trying to sort words containing unexpected characters. This is because Sass doesn’t have access to any universal sorting order; I had to hard-code the order to follow somewhere.
And this somewhere is the $default-order variable:
$default-order: '!' '#' '$' '%' '&' "'" '(' ')' '*' '+' ',' '-' '.' '/' '['
'\\'']' '^' '_' '{' '|' '}' '~''0' '1' '2' '3' '4' '5' '6' '7' '8' '9' 'a' 'b'
'c' 'd' 'e' 'f' 'g' 'h' 'i' 'j' 'k' 'l' 'm' 'n' 'o' 'p' 'q' 'r' 's' 't' 'u'
'v' 'w' 'x' 'y' 'z' !default;
As you can see, it only deals with a restricted set of characters: mostly special characters, numbers and letters. You might notice there are no uppercase letters; I decided not to deal with case when sorting, as it simply added too much complexity to the sorting functions.
Anyway, if you need to add extra characters, you can override this list or make your own variable and pass it to the sort function as the $order (2nd) argument.
$custom-order: ;
$sort: sort($list, $order: $custom-order);
Note that if an unrecognized character is found, it is skipped.
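The effect of the $order argument can be mimicked with a custom sort key. Below is a speculative Python sketch (with a truncated order string for brevity) where characters missing from the order are simply skipped, matching the behavior described above:

```python
# Truncated stand-in for the $default-order list
DEFAULT_ORDER = "0123456789abcdefghijklmnopqrstuvwxyz"

def order_key(word, order=DEFAULT_ORDER):
    # Rank each character by its position in `order`;
    # unrecognized characters are dropped, as in SassySort
    return [order.index(c) for c in word if c in order]

words = ['pears', 'apples', 'p_ears!']
print(sorted(words, key=order_key))  # → ['apples', 'pears', 'p_ears!']
```

Note how 'p_ears!' sorts exactly like 'pears': the underscore and the exclamation mark are simply ignored by the key.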
That’s pretty much it folks. If you really want to dig in the code of the algorithms, be sure to have a look at the repository however it was mostly JavaScript to Sass code conversion, so there is no magic behind it.
If you feel like implementing other sorting algorithms, be sure to have a shot and open an issue / pull-request.
This is the 2nd part of the Git Tips & Tricks series from Loïc Giraudel. If you missed the first post, be sure to give it a read! And now fasten your belts folks, because this is some serious Git fu!
Hey people! I hope you enjoyed the first part of the series. In this one, I will introduce you even more tricks to improve the diff output, create some useful aliases and master (no pun intended) mandatory commands to be able to approach advanced Git concepts and commands. Ready?
Unix and Windows systems use different end-of-line characters. This is a problem when Windows and Unix developers work on the same Git project: Unix developers can see ugly ^M symbols at the end of lines created by Windows developers.
To stop viewing those ^M
symbols, just change the whitespace
option:
$ git config --global core.whitespace cr-at-eol
By default, the git diff
command displays the filename with either a/
or b/
prefix:
$ git diff
diff --git a/Gruntfile.js b/Gruntfile.js
index 74d58f9..569449c 100755
--- a/Gruntfile.js
+++ b/Gruntfile.js
This prefix can be a little annoying when you want to quickly copy and paste the filename (for instance to paste it into a git add command). The prefix is quite useless anyway, so you can remove it from the diff output with the --no-prefix parameter:
$ git diff --no-prefix
diff --git Gruntfile.js Gruntfile.js
index 74d58f9..569449c 100755
--- Gruntfile.js
+++ Gruntfile.js
To avoid adding the flag on every single diff command, you can make it a default option in your config:
$ git config --global --bool diff.noprefix true
Did you know that you can create your own Git aliases?
$ git config --global alias.co "checkout"
$ git config --global alias.br "branch"
$ git config --global alias.cob "checkout -b"
$ git config --global alias.rh "reset HEAD"
$ git co master
$ git br someStuff origin/someStuff
$ git cob someStuff origin/someStuff
$ git rh myFile
My most used Git command is git status
but instead of creating an alias like git st
, I created a bash alias in my ~/.bashrc
file:
$ cat ~/.bashrc
[…]
alias gst="git status"
If your project has a deep directory tree, it can be useful to have a bash alias to go back to the root of the Git project in one line instead of multiple cd ..
commands or counting /..
in a cd ../../../..
command.
For unix systems, this alias looks like this (put it in your ~/.bashrc
file):
/home/workspace/myProject $ alias gr='[ ! -z `git rev-parse --show-cdup` ] && cd `git rev-parse --show-cdup || pwd`'
/home/workspace/myProject $ cd test/phpunit/apps/sso/lib/action/
/home/workspace/myProject/test/phpunit/apps/sso/lib/action $ gr
/home/workspace/myProject $
If you happen to be curious, feel free to explore the git rev-parse
command: it’s a magic command used by many other commands to do many different things. The manual page says:
"git-rev-parse - Pick out and massage parameters"
For instance, this command can convert a commit ref to a real SHA1:
$ git rev-parse HEAD~17
7f292beec1e55e33d911a942f59e942a04828935
It can return the .git
path of the current project:
$ git rev-parse --git-dir
/home/workspace/myProject/.git
It can return the relative path to go back to project root:
/home/workspace/myProject/test/phpunit/apps/sso/lib/action $ git rev-parse --show-cdup
../../../../../../
On Unix systems, the default commit message editor is Vi. To use your favorite editor instead, set the core.editor option:
$ git config --global core.editor "~/Sublime\ Text\ 3/sublime_text -w"
Large scale projects have many Git branches: developers create new ones every day, do many merges, switch to branches created by workmates, co-develop features in shared branches and so on.
It’s possible to track a remote branch, which displays useful information in the git status command:
$ git status
# On branch infinite-scroll
# Your branch and 'origin/sharedBranches/frontendTeam/infinite-scroll' have diverged,
# and have 1 and 2 different commits each, respectively.
nothing to commit (working directory clean)
In the previous example, I’m on a local infinite-scroll branch which is tracking a sharedBranches/frontendTeam/infinite-scroll branch in the origin repository. My branch and the remote one have diverged: my branch contains 1 commit which is not in the remote branch and the remote branch contains 2 commits which are not in my local branch. I will have to merge or rebase the remote branch if I want to push to the same remote location.
To track a remote branch you can type the following command:
$ git branch --set-upstream [name of the local branch] [name of the remote branch]
For instance:
$ git branch --set-upstream infinite-scroll origin/sharedBranches/frontendTeam/infinite-scroll
If you happen to be running Git version >= 1.8.0, you can use the -u
or --set-upstream-to
parameter:
$ git branch -u [remote branch]
$ git branch -u origin/sharedBranches/frontendTeam/infinite-scroll
$ git branch --set-upstream-to origin/sharedBranches/frontendTeam/infinite-scroll
When you create a new branch, you can specify a starting point. If this starting point is a remote branch (and not a local branch or a commit), the new branch will track the starting point.
$ git branch foo origin/master
Branch foo set up to track remote branch master from origin.
$ git checkout foo
Switched to branch 'foo'
Your branch is up-to-date with 'origin/master'.
This is the default behavior but can be changed in your configuration with the branch.autosetupmerge
parameter. The default value is true
but if you want to track the starting point even if it’s a local branch, switch it to always
.
$ git config --global branch.autosetupmerge always
$ git branch bar foo
Branch bar set up to track local branch foo.
If you don’t want to track the starting point, whether it’s a local or a remote branch, use false
.
$ git config --global branch.autosetupmerge false
$ git branch foo origin/master
$ git checkout foo
Switched to branch 'foo'
$ git status
# On branch foo
nothing to commit, working directory clean
It’s quite easy to delete a local branch with the -d and -D parameters of the git branch command, but the syntax to delete a remote branch is not so intuitive. Actually, you don’t really delete a remote branch per se; instead, you push nothing to an existing destination.
The git push origin master
command is a shortcut to the command git push origin master:master
. The master:master
syntax means local-branch-name:destination-branch-name
. So to push nothing to a remote branch, you can use the following command:
$ git push origin :myBranch
Luckily, since Git 1.7.0, there is an easier syntax to do this:
$ git push origin --delete myBranch
Using a message template for Git commits is a good practice, especially in big projects with a lot of people involved. It helps find commits related to a specific feature, a specific team, and so on.
To change the default template, you can write a small text file somewhere on your disk, then reference it in your Git configuration:
$ git config --global commit.template /home/loic/git/committemplate.txt
Here’s what my committemplate.txt looks like:
$ cat /home/loic/git/committemplate.txt
[MyTeam] [#FeatureId] - Description of the feature
More informations about the feature
Unfortunately, it’s not possible to use a bash script instead of a plain text message to, let’s say, dynamically add the branch name. Fortunately, the same thing can be done with Git hooks.
Hooking is a common programming pattern that lets users extend the behavior of a piece of software by running custom code at specific moments.
With Git, you can create a client-side hook that runs before the user writes their commit message. A hook can retrieve some information to pre-fill the commit message. Let’s create one in order to fill the commit message with the local branch name, shall we?
$ cat .git/hooks/prepare-commit-msg
#!/bin/bash
branchname=`git rev-parse --abbrev-ref HEAD`
commitMsgFile=$1
commitMode=$2
# $2 is the commit mode
# if $2 == 'commit' => user used `git commit`
# if $2 == 'message' => user used `git commit -m '…'`
existingMsg=`cat $commitMsgFile`
if [ "$commitMode" = "message" ]; then
echo -n "[$branchname] " > $commitMsgFile
echo $existingMsg >> $commitMsgFile
else
firstline=`head -n1 $commitMsgFile`
# We check the first line of the commit message file.
# If it’s an empty string then user didn’t use `git commit --amend` so we can fill the commit msg file
if [ -z "$firstline" ]; then
echo "[$branchname] " > $commitMsgFile
fi
fi
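One detail the listing above doesn’t show: Git silently skips hooks that aren’t executable, so don’t forget to set the executable bit on the file.

```shell
# From the repository root: without this, the hook never runs
chmod +x .git/hooks/prepare-commit-msg
```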
Now let’s try our new hook:
$ git checkout -b my-local-branch
Switched to a new branch 'my-local-branch'
$ echo 'Some text' > file1
$ git add file1
$ git commit
My text editor opens with the following content:
[my-local-branch]
I can update this message to add some information. If I amend my commit to change the message, it shouldn’t overwrite my message:
$ git log -1 --oneline
cd2b660 [my-local-branch] This is my awesome feature.
$ git commit --amend
My text editor opens with the following content:
[my-local-branch] This is my awesome feature.
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch my-local-branch
# Changes to be committed:
# new file: file1
#
# Untracked files:
# commit
#
And if I try to commit with the -m parameter:
$ echo 'Some text' > file2
$ git add file2
$ git commit -m 'This is another feature'
$ git log -1 --oneline
4d169f5 [my-local-branch] This is another feature
Now that we’ve covered the basics, let’s move on to some more advanced Git techniques. Those tricks become useful when you work in a complex Git environment.
Each commit must have only one purpose (cf. Law #2 at the beginning of Git Tips & Tricks - Part 1), but it’s easy to spot some small mistakes while editing a file. If you don’t want those little fixes to end up in the commit you’re creating, so you can put them in a dedicated commit instead, the best way is to split the file modifications when adding the file to the staging area.
To do this you can use the --patch (or -p) parameter with the git add command.
Let’s take an example:
$ echo "Here’s my tetx file" > file.txt
$ git add -A
$ git commit -m 'Initial commit'
I’ve just created a text file with only one line. Now, I just want to add a second line, but while editing my file I see that I wrote “tetx file” instead of “text file”, so I add my new line and fix the first one at the same time. Let’s see what our diff looks like:
$ git diff
diff --git file.txt file.txt
index 6214953..1d54a52 100644
--- file.txt
+++ file.txt
@@ -1 +1,2 @@
-Here’s my tetx file
+Here’s my text file
+And this is the second line
If I want to split the two changes into two separate commits, I can use the --patch parameter. Let’s try to create a first commit fixing the mistake and a second commit adding the new line:
$ git add --patch file.txt
diff --git a/file.txt b/file.txt
index 6214953..1d54a52 100644
--- a/file.txt
+++ b/file.txt
@@ -1 +1,2 @@
-Here’s my tetx file
+Here’s my text file
+And this is the second line
Stage this hunk [y,n,q,a,d,/,e,?]?
At the end of the git add command, there is a prompt asking me whether I want to add this hunk to the staging area. The available options (shown as [y,n,q,a,d,/,e,?]) include y to stage the hunk, n to skip it, q to quit, a to stage this hunk and all the remaining ones in the file, d to skip this hunk and all the remaining ones in the file, / to search for a hunk matching a regex, e to manually edit the hunk, and ? to print help.
If I type e, the hunk will be opened in my text editor:
# Manual hunk edit mode -- see bottom for a quick guide
@@ -1 +1,2 @@
-Here’s my tetx file
+Here’s my text file
+And this is the second line
# ---
# To remove '-' lines, make them ' ' lines (context).
# To remove '+' lines, delete them.
# Lines starting with # will be removed.
#
# If the patch applies cleanly, the edited hunk will immediately be
# marked for staging. If it does not apply cleanly, you will be given
# an opportunity to edit again. If all lines of the hunk are removed,
# then the edit is aborted and the hunk is left unchanged.
The first commit should only fix the mistake so let’s remove the +And this is the second line line and save the change:
Now, if I launch a git status command, I can see this:
$ git status
# On branch master
# Your branch is ahead of 'origin/master' by 1 commit.
#
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# modified: file.txt
#
# Changes not staged for commit:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: file.txt
#
My file is partially staged. If I want to see the staged part:
$ git diff --cached
diff --git file.txt file.txt
index 6214953..cc58d14 100644
--- file.txt
+++ file.txt
@@ -1 +1 @@
-Here’s my tetx file
+Here’s my text file
If I want to see the unstaged part:
$ git diff
diff --git file.txt file.txt
index cc58d14..1d54a52 100644
--- file.txt
+++ file.txt
@@ -1 +1,2 @@
Here’s my text file
+And this is the second line
Now, I can create my first commit easily, then create the second one:
$ git commit -m 'Typo fix'
[master 87edc4a] Typo fix
1 file changed, 1 insertion(+), 1 deletion(-)
$ git add file.txt
$ git commit -m 'Add of a new line'
[master a11a14e] Add of a new line
1 file changed, 1 insertion(+)
$ git log
commit a11a14ef7e26f2ca62d4b35eac455ce636d0dc09
Author: lgiraudel <lgiraudel@mydomain.com>
Add of a new line
commit 87edc4ad8c71b95f6e46f736eb98b742859abd95
Author: lgiraudel <lgiraudel@mydomain.com>
Typo fix
commit 3102416a90c431400d2e2a14e707fb7fd6d9e06d
Author: lgiraudel <lgiraudel@mydomain.com>
Initial commit
It’s sometimes useful to pick a commit from another branch to add it in the current branch.
The command to do this is really simple:
$ git cherry-pick [commit SHA1]
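In context, a full cherry-pick round trip looks like this. The sketch below uses a throwaway repository and made-up branch and file names:

```shell
# Create a commit on a side branch, then pick it onto the original branch
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.name Demo && git config user.email demo@example.com
git commit -q --allow-empty -m 'Initial commit'
git checkout -q -b feature
echo 'fix' > bugfix.txt
git add bugfix.txt
git commit -q -m 'Bugfix'
sha=$(git rev-parse HEAD)   # SHA1 of the commit we want to pick
git checkout -q -           # back to the original branch
git cherry-pick "$sha"      # bugfix.txt now exists on this branch too
```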
This command has some useful parameters:
-e to edit the commit message
-x to append a line referencing the cherry-picked commit to the commit message
--no-commit or -n to apply the commit’s changes without creating a commit on the branch
That’s it for today folks! In the next parts, we’ll deal with a few more subjects.
Meanwhile keep practicing!
Fast forward to the present day. Sass 3.3 has been released and with it a ton of new features. Today, I'd like to show you how to use some of these features to build a sorting function in Sass.
The heart of any sorting function is the ability to compare two strings and determine which one should go before the other. Most programming languages make this fairly easy, but to do this in Sass we have to build our own string comparison function.
For starters, we need to teach Sass the correct order to sort strings in based on the characters each string contains. Let's define this in a variable:
// Declare the default sort order. Use Sass's !default flag so this
// value doesn't override the variable if it was already declared.
$default-sort-order: a b c d e f g h i j k l m n o p q r s t u v w x y z !default;
This can be used to declare that strings that begin with a should appear before strings that begin with b or c and so on. In real life you'd probably want to include other characters in your sort order string (like numbers, characters with accents, and other symbols), but a-z works for our example.
Now for the meat of our comparison function:
@function str-compare($string-a, $string-b, $order: $default-sort-order) {
// Cast values to strings
$string-a: to-lower-case($string-a + unquote(''));
$string-b: to-lower-case($string-b + unquote(''));
// Loop through and compare the characters of each string...
@for $i from 1 through min(str-length($string-a), str-length($string-b)) {
// Extract a character from each string
$char-a: str-slice($string-a, $i, $i);
$char-b: str-slice($string-b, $i, $i);
// If characters exist in $order list and are different
// return true if first comes before second
@if $char-a and $char-b and index($order, $char-a) != index($order, $char-b)
{
@return index($order, $char-a) < index($order, $char-b);
}
}
// In case they are equal after all characters in one string are compared,
// return the shortest first
@return str-length($string-a) < str-length($string-b);
}
What's going on here? We are basically looping through the characters in each string ($string-a and $string-b) and looking up the location of each in the $order list with the Sass index() function. This gives us two numbers that can be compared to see which character goes before the other. If the numbers are the same we loop around to the next pair of characters, but if they are different we've found which one goes first.
The str-compare() function returns true if $string-a goes before $string-b and false if it does not.
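For instance, assuming the function above is in scope, a couple of hypothetical calls behave like this:

```scss
// 'apple' sorts before 'banana': the first characters differ (a < b)
$before: str-compare('apple', 'banana'); // true
// 'pear' does not sort before 'orange': p comes after o
$after: str-compare('pear', 'orange'); // false
```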
For the sake of our example, I'm going to implement the sorting function using the Bubble Sort algorithm because it's easy to understand.
Since Bubble Sort relies on swapping two values in a list we need one more function to make this easy for us:
@function swap($list, $index-a, $index-b) {
@if abs($index-a) > length($list) or abs($index-b) > length($list) {
@return $list;
}
$tmp: nth($list, $index-a);
$list: set-nth($list, $index-a, nth($list, $index-b));
$list: set-nth($list, $index-b, $tmp);
@return $list;
}
Our new swap() function accepts a list along with two indexes ($index-a and $index-b) that indicate the positions of the two items in the list to swap. To avoid cycling through the list to swap values, I've taken advantage of the set-nth() function (new in Sass 3.3) which simply updates the list instead of building a fresh one (which is far better for performance).
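Here is a quick sketch of what swap() does, assuming the function above (the values are made up):

```scss
// Swap the first and last items
$swapped: swap(10px 20px 30px, 1, 3); // => 30px 20px 10px
// An out-of-range index leaves the list untouched
$same: swap(10px 20px 30px, 1, 4); // => 10px 20px 30px
```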
Armed with str-compare() and swap(), we now have everything we need to build a proper string sorting function:
@function sort($list, $order: $default-sort-order) {
// Cycle through each item in the list
@for $i from 1 through length($list) {
// Compare the item with the previous items in the list
@for $j from $i * -1 through -1 {
// abs() trick to loop backward
$j: abs($j);
// Compare both values
@if $j > 1 and str-compare(nth($list, $j), nth($list, $j - 1), $order) {
// If the item should go before the other, swap them
$list: swap($list, $j, $j - 1);
}
}
}
// Return the sorted list
@return $list;
}
Bubble Sort basically loops through the list, compares adjacent items and swaps them when needed, until the list is completely sorted.
Now let's test it:
$list: oranges pears apples strawberries bananas;
$sort: sort($list);
// => apples bananas oranges pears strawberries
Hurray! It works like a charm.
My first attempts to create a sorting function in Sass used a much slower algorithm. But thanks to some prompting by Sam Richards (he got me started with QuickSort) I eventually explored a number of different sorting algorithms. I've now implemented several of these in Sass. You can find the code and tests in the SassySort repository.
The following is the first post of a series written by my dear brother Loïc, Git expert at Future PLC. I’ll release the next parts in the next few weeks, so be sure to stay tuned for more Git awesomeness!
Hi people! Today, I’m gonna share with you some Git tips & tricks I’ve already shared with my workmates at Future PLC. But before even starting, let’s never forget the most important laws of Git.
Law #1: each commit must leave the branch in a stable state. You must be able to checkout any commit in the project and still have a working application to play with. A functionality shouldn’t be split into several commits. For instance, don’t put the HTML, CSS and JS of a new feature in three separate commits: the functionality requires all of them to work so they should all belong to the same commit. If you have to pause your work (time to grab lunch, go home, switch to another task or whatever), create a temporary commit which will be enhanced later.
Law #2: each commit has only one purpose. If you see a bug while you’re working on a new functionality, try to fix this bug in a separate commit so you are able to revert (or cherry-pick) either commit if needed.
Ok, now let’s start with the real tips & tricks…
If you often have to switch from one branch to another (like a Git monkey), having a great prompt is quite useful to know which branch you’re currently working on, whether you have modified some files, whether you have some commits to push to or pull from the server, and so on.
My favorite so far has been created by Tung Nguyen and can be found right here.
This prompt displays the current branch name, whether files have been modified or staged, and whether there are commits to push or pull.
In this image, I’m working on the “myFork” branch and I have modified and/or staged some files but I don’t have any commit to push or to pull.
To install this prompt in a Linux environment, just download it somewhere and update your ~/.bashrc file to add this line:
. /path/to/gitprompt
That’s it. Just re-open your terminal and go to a Git project directory.
This is a very basic need when working with Git. Have you ever found yourself asking:
How am I supposed to find a specific commit relative to a specific part of code?
Thankfully there are quite a few ways to do this.
git log -p
The simplest way is to use git log. If you add -p (or -u or --patch), you will get the modified code of each commit; then there’s nothing for it but to search the output for a specific string.
git log -S
A better method is to use the -S parameter to search for a specific string: git log -S console.log will find all commits whose patch content contains the string console.log. It’s better than the previous method because it doesn’t search the commit message or metadata (username, date…), and it only searches the patch content itself, not the unchanged lines around it.
You can add several parameters to narrow down the search:
git log -S console.log --author lgiraudel --before="2013-10-01 00:00" --after="2013-06-01 00:00" -- web/js
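To see the pickaxe in action, here is a self-contained sketch in a throwaway repository (file names and commit messages are invented):

```shell
# Only the commit that introduces the string matches -S
cd "$(mktemp -d)"
git init -q pickaxe && cd pickaxe
git config user.name Demo && git config user.email demo@example.com
echo "console.log('hi')" > app.js
git add app.js && git commit -q -m 'Add logging'
echo 'var x = 1' >> app.js
git add app.js && git commit -q -m 'Unrelated change'
git log -S console.log --oneline   # only 'Add logging' is listed
```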
git blame
git blame displays each line of a file along with the last commit that modified that line. It’s the best way to find out who added a specific line to a file, when, and why. Actually the command name kind of speaks for itself: blame.
It requires a file path to work:
$ git blame Gruntfile.js
15b95608 (Loic 2013-10-08 14:21:51 +0200 1) module.exports = function(grunt) {
15b95608 (Loic 2013-10-08 14:21:51 +0200 2)
15b95608 (Loic 2013-10-08 14:21:51 +0200 3) // Project configuration.
15b95608 (Loic 2013-10-08 14:21:51 +0200 4) var gruntConfig = {
15b95608 (Loic 2013-10-08 14:21:51 +0200 5) pkg: grunt.file.readJSON('package.json'),
0a1d4d7b (Alex 2013-10-25 18:15:36 +0200 42) jshint: {
0a1d4d7b (Alex 2013-10-25 18:15:36 +0200 43) all: [
0a1d4d7b (Alex 2013-10-25 18:15:36 +0200 44) 'Gruntfile.js',
0a1d4d7b (Alex 2013-10-25 18:15:36 +0200 45) '/web/js/**/*.js'
0a1d4d7b (Alex 2013-10-25 18:15:36 +0200 46) ],
0a1d4d7b (Alex 2013-10-25 18:15:36 +0200 47) options: {
0a1d4d7b (Alex 2013-10-25 18:15:36 +0200 48) jshintrc: '.jshintrc'
0a1d4d7b (Alex 2013-10-25 18:15:36 +0200 49) }
0a1d4d7b (Alex 2013-10-25 18:15:36 +0200 50) },
15b95608 (Seb 2013-10-08 14:21:51 +0200 6) jasmine: {
df9b1c21 (Seb 2013-10-11 11:50:08 +0200 7) src: 'web/js/**/*.js',
df9b1c21 (Seb 2013-10-11 11:50:08 +0200 8) options: {
df9b1c21 (Seb 2013-10-11 11:50:08 +0200 9) vendor: [
[…]
It’s possible to limit the output to specific lines with the -L parameter: git blame -L 10,20 will only output lines 10 to 20.
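A tiny self-contained example of the -L parameter (throwaway repository, invented file name):

```shell
# Blame only a slice of the file
cd "$(mktemp -d)"
git init -q blame-demo && cd blame-demo
git config user.name Demo && git config user.email demo@example.com
printf 'line one\nline two\nline three\n' > notes.txt
git add notes.txt
git commit -q -m 'Add notes'
git blame -L 2,3 notes.txt   # only lines 2 and 3 are annotated
```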
git diff is one of the most used Git commands, typically run before adding changes to the staging area in order to avoid pushing mistakes to the repository. The diff command can be customized to avoid some inconveniences.
In the diff output, each change is displayed like this:
$ git diff
diff --git a/Gruntfile.js b/Gruntfile.js
index 74d58f9..569449c 100755
--- a/Gruntfile.js
+++ b/Gruntfile.js
@@ -41,7 +41,7 @@ module.exports = function(grunt) {
},
jshint: {
all: [
- 'Gruntfile.js',
+ 'gruntfile.js',
'/web/js/**/*.js'
],
options: {
But if you use the --color-words parameter, it will write the old and new text on the same line with red and green colors, which can be easier to read in some cases.
When spaces are added to a line, the git diff command displays the whole line as changed. When you read your changes before creating a commit, this can make the diff harder to understand, especially when the spaces have been added/removed by your IDE (useless spaces, fixed indentation, tabs replaced by spaces, etc.).
To avoid this pollution in the git diff output, you can add the -w option to omit space (and tab) changes.
Let’s take an explicit example:
$ git diff
diff --git a/web/js/testedJs/lazy.js b/web/js/testedJs/lazy.js
index b2185a2..887387f 100755
--- a/web/js/lazy.js
+++ b/web/js/lazy.js
@@ -427,28 +427,30 @@
return;
}
- if (url !== null && url !== '' && typeof url !== 'undefined') {
- jQuery.ajax({
- url: url,
- type: 'GET',
- context: this,
- dataType: $elem.data('tmpl') !== undefined ? 'json' : 'html',
- success: self.injectData,
- error: oConf.onError || self.error,
- beforeSend: function () {
- $elem.addClass('loading');
- if (oConf.onLoad && 'function' === typeof oConf.onLoad) {
- oConf.onLoad();
- }
- },
- complete: function () {
- $elem.removeClass('loading');
- if (oConf.onComplete && 'function' === typeof oConf.onComplete) {
- oConf.onComplete();
- }
- }
- });
+ if (url === null || url === '' || typeof url === 'undefined') {
+ return;
}
+
+ jQuery.ajax({
+ url: url,
+ type: 'GET',
+ context: this,
+ dataType: $elem.data('tmpl') !== undefined ? 'json' : 'html',
+ success: self.injectData,
+ error: oConf.onError || self.error,
+ beforeSend: function () {
+ $elem.addClass('loading');
+ if (oConf.onLoad && 'function' === typeof oConf.onLoad) {
+ oConf.onLoad();
+ }
+ },
+ complete: function () {
+ $elem.removeClass('loading');
+ if (oConf.onComplete && 'function' === typeof oConf.onComplete) {
+ oConf.onComplete();
+ }
+ }
+ });
};
/**
What are the important updates in this piece of code? It’s not quite easy to tell what has been done with a diff like this. But with the -w option:
$ git diff -w
diff --git a/web/js/testedJs/lazy.js b/web/js/testedJs/lazy.js
index b2185a2..887387f 100755
--- a/web/js/lazy.js
+++ b/web/js/lazy.js
@@ -427,7 +427,10 @@
return;
}
- if (url !== null && url !== '' && typeof url !== 'undefined') {
+ if (url === null || url === '' || typeof url === 'undefined') {
+ return;
+ }
+
jQuery.ajax({
url: url,
type: 'GET',
@@ -448,7 +451,6 @@
}
}
});
- }
};
/**
It’s now easier to catch the changes: I’ve replaced the test wrapping my Ajax call with a 3-line guard right before it, which reduces the indentation level of the Ajax call.
I hope those little tricks will help. In the next part, I’ll continue with other small smart tricks before tackling some advanced and useful Git features.
Anyway, I had a couple of minutes to kill the other day so I opened a new pen and started writing a little button library. Yes, another one! Actually my point wasn’t to improve anything, I just wanted to code some Sass, just for the sake of it.
Anyway, I came up with some interesting things and Stuart suggested I write a little something about it, so here we are.
My point was to create a base class and a couple of modifiers to be used along with the base class, using the brand new &--modifier syntax. Then you can stack as many modifiers as you want as long as they don’t conflict with each other (multiple color schemes for instance).
Also the code should be DRY and the CSS output well optimized. As light as it can be! And last but not least, the most important pieces of configuration should be handled with a couple of variables to avoid digging into the code.
Let’s start with the configuration, shall we?
// Configuration
$btn-name: 'button' !default;
$btn-size-ratio: 1.2 !default;
$btn-hover: saturate 25% !default;
$btn-border: darken 20% !default;
$btn-background: (
'default': #565656,
'success': #468847,
'danger': #b94a48,
'warning': #c09853,
'info': #3a87ad,
) !default;
Everything might not be intuitive so let me explain what each variable is for:
$btn-name is the name of the module (e.g. the base class).
$btn-size-ratio is the ratio used for the small and large modifiers. Basically large is $btn-size-ratio times bigger than usual, while small is $btn-size-ratio times smaller.
$btn-hover is a 2-item list, the first item being the Sass color manipulation function to use, while the second is the argument for this function (e.g. saturate 25%).
$btn-border kind of works the same way; if not false, it defines the function used to compute the border-color based on the button color. If false, it just disables the border.
$btn-background is a map of all color schemes; every color is mapped to a name so a modifier like .button--default will make a grey button.
Also note the 2 measures we take to avoid conflicts with the user’s code:
the !default flag for each variable,
the $btn- prefix for every variable name.
#{$btn-name} {
// Default styles
padding: 0.5em;
margin-bottom: 1em;
color: #fff;
// Some sex appeal!
transition: background 0.15s;
border-radius: 0.15em;
box-shadow: inset 0 1px rgba(255, 255, 255, 0.15);
// Border or not border?
border: if($btn-border, 1px solid, none);
// Modifiers
&--large {
font-size: 1em * $btn-size-ratio;
}
&--small {
font-size: 1em / $btn-size-ratio;
}
&--bold {
font-weight: bold;
}
&--upper {
text-transform: uppercase;
}
&--block {
display: block;
width: 100%;
}
// Color schemes
@each $key, $value in $btn-background {
&--#{$key} {
@include button-color($value);
}
}
}
Here is how it works: we define everything inside a unique CSS selector named after the $btn-name variable. For each modifier, we use &--modifier which outputs a .button--modifier rule at the CSS root. I have made a couple of simple modifiers but you could make as many as you want of course.
You can see we make the border conditional thanks to the ternary if() function. If $btn-border is set to false, then we hide the border, else we add a 1px solid border without specifying any color for now.
Regarding color schemes, we simply loop through the $btn-background map, and call a button-color mixin passing the color as its only argument. Elegant.
The button-color mixin aims at dealing with color schemes. We have set up quite a few color schemes in the $btn-background map, over which we’ve iterated to apply those colors to the classes they belong to.
Now the mixin will actually apply the background-color to the button, as well as the hover/active state, and the border if not set to false.
@mixin button-color($color) {
background-color: $color;
&:hover,
&:active {
background: call(nth($btn-hover, 1), $color, nth($btn-hover, 2));
}
@if $btn-border != false {
border-color: call(nth($btn-border, 1), $color, nth($btn-border, 2));
}
}
Remember what our $btn-hover and $btn-border variables look like? First a color function, then a percentage. To apply this function to the color, we can use the call feature from Sass 3.3.
The call function calls the function named after its first argument, if it exists, passing it all the remaining arguments in the same order. So in our case, the first call will be saturate($color, 25%). Meanwhile the second one works the same way, except it first checks that the variable is not false: in case $btn-border is set to false, we should not output the border-color.
I don’t know for you, but I don’t like letting the compiler fail. I’d rather handle the potential errors myself; I feel like it’s better for the end user.
So we should probably make a couple of checks to make sure everything’s right before dumping our CSS in the button-color
mixin. Here is how I did it:
@mixin button-color($color) {
$everything-okay: true;
// Making sure $color is a color
@if type-of($color) != color {
@warn "`#{$color}` is not a color for `button-color`";
$everything-okay: false;
}
// Making sure $btn-hover and $btn-border
// are 2 items long
@if length($btn-hover) != 2 or length($btn-border) != 2 {
@warn "Both `$btn-hover` and `$btn-border` should be two items long for `button-color`.";
$everything-okay: false;
}
// Making sure first items from $btn-hover and $btn-border
// are valid functions
@if not
function-exists(nth($btn-hover, 1)) or not
function-exists(nth($btn-border, 1))
{
@warn "Either `#{nth($btn-hover, 1)}` or `#{nth($btn-border, 1)}` is not a valid function for `button-color`.";
$everything-okay: false;
}
// Making sure second items from $btn-hover and $btn-border
// are percentages
@if type-of(nth($btn-hover, 2)) !=
number or
type-of(nth($btn-border, 2)) !=
number
{
@warn "Either `#{nth($btn-hover, 2)}` or `#{nth($btn-border, 2)}` is not a valid percentage for `button-color`.";
$everything-okay: false;
}
// If there is no mistake
@if $everything-okay == true {
// Mixin content
}
}
Always validate user input in your custom functions. Yes, it takes a decent amount of space. Yes, it makes the mixin longer. Yes, it’s a pain in the ass to write. On the other hand, if the user makes a mistake with one of the arguments, he’ll know what’s going on, or why the mixin didn’t output anything.
Note how we use the new function-exists function from Sass 3.3 to make sure the functions set in the $btn-border and $btn-hover variables actually exist. We could push the tests further by making sure it’s one of saturate, desaturate, darken, lighten, adjust-hue, grayscale, complement or invert, but I feel like we already do a pretty good job covering potential mistakes here.
The module is quite simple right now but I feel like it introduces a couple of often overlooked and/or new notions like call, function-exists, @warn, maps, BEM, Sass 3.3…
You can have a look at the final code here:
See the Pen (Another) Sass button lib by Kitty Giraudel (@KittyGiraudel) on CodePen.
Spoilers! This post is the solution to a CSS riddle proposed in a previous article.
Time’s up people! First, thanks for playing. There have been quite a few proposals, all of them very interesting in their own way. In the end, I think the riddle was slightly easier than expected but it’s pretty cool to dig into your code to see how you’ve worked around the problem.
Among the possible solutions, I thought about:
In this post I will be explaining my solution step by step and I’ll end the article by talking about some of the clever proposals you sent me.
First, let’s give to Caesar what belongs to Caesar: the original idea comes from Ana Tudor which I then revisited to make it backward-compatible, decent on small screens, easily maintainable with Sass and so on. So thanks Ana!
Then, rest assured there is nothing magical in this trick. As a proof, some of you came up with a very similar solution. The main idea behind it is to use pseudo-elements to draw the invisible circle and apply a background color to the cropped sections. So for each box, the not-cropped part is colored with a background-color rule, while the cropped part is made of a huge box-shadow (55em spread, no blur) on an absolutely positioned pseudo-element.
<ul class="boxes">
<li class="box box--top box--left box--alpha">
<section class="box__content">
<header class="box__header"></header>
<footer class="box__footer box__cut"></footer>
</section>
</li>
<li class="box box--top box--right box--beta">
<section class="box__content">
<header class="box__header"></header>
<footer class="box__footer box__cut"></footer>
</section>
</li>
<li class="box box--bottom box--left box--gamma">
<section class="box__content">
<header class="box__header box__cut"></header>
<footer class="box__footer"></footer>
</section>
</li>
<li class="box box--bottom box--right box--delta">
<section class="box__content">
<header class="box__header box__cut"></header>
<footer class="box__footer"></footer>
</section>
</li>
</ul>
As you can see I added a couple of classes to make the code DRYer:
.box--left to left boxes,
.box--right to right boxes,
.box--top to top boxes,
.box--bottom to bottom boxes,
.box__cut to the cropped section of each box (.box__footer for top boxes, .box__header for bottom boxes).
Also every box has its own name, like .box--alpha. This is meant to make it possible to apply colors based on a Sass map.
Using Sass really helped me achieve such a tricky component. Thanks to Sass variables, it’s easy to maintain support for small screens and old browsers, or simply update the gutter size or the invisible circle radius.
$gutter: 2em;
$mask-size: 12em; // Invisible circle
$circle-size: 5em; // Inner disk
$breakpoint: 700px;
$border-radius: 0.25em; // Boxes radius
$colors: (
alpha: #1abc9c,
beta: #2ecc71,
gamma: #3498db,
delta: #9b59b6,
);
Everything is computed from there. There will be absolutely no magic numbers anywhere.
Let’s start with applying some default styles to our elements (.boxes, .box…).
// Boxes wrapper
// 1. Clearing inner float
// 2. Enabling position context for pseudo-element
.boxes {
list-style: none;
padding: 0 $gutter;
margin: 0;
overflow: hidden; // 1
position: relative; // 2
// Central dark disk
&:after {
content: '';
position: absolute;
width: $circle-size;
height: $circle-size;
top: 50%;
left: 50%;
margin: -$circle-size/2 0 0 -$circle-size/2;
border-radius: 50%;
border: 0.5em solid #2c3e50;
background: #34495e;
// Hiding it on small screens
@media (max-width: $breakpoint) {
content: none;
}
// Hiding it on browsers not supporting box-shadow/border-radius/pseudo-elements
// Thanks to Modernizr
.no-boxshadow & {
content: none;
}
}
}
I think the code kind of speaks for itself so far. The :after pseudo-element is used to create the central dark disk. It is absolutely centered, sized according to Sass variables and so on. We remove it on small screens and unsupported browsers.
One of the rules of the game was to keep the same gutter between left and right boxes and top and bottom boxes. Let’s start with the easiest of both: vertical gutter.
.box {
float: left;
width: 50%;
margin: $gutter 0;
// Moving them back to a single column on small screens
@media (max-width: $breakpoint) {
width: 100%;
float: none;
}
}
Boxes spread across half the width of the parent. Some of you did use calc to handle the gutter between left and right boxes right away, but it lowers the browser support so we’ll do it differently. For horizontal gutters, here is how we can handle it:
// Inner box wrapper
.box__content {
// Adding a right padding on left boxes for the central gutter
.box--left & {
padding-right: $gutter;
}
// Adding a left padding on right boxes for the central gutter
.box--right & {
padding-left: $gutter;
}
// Removing padding on small screens
@media (max-width: $breakpoint) {
padding: 0 !important;
}
}
There we go. Since we are using a clean box model (i.e. box-sizing: border-box), we can add a padding to the inner wrapper (section), left or right depending on the box position, in order to simulate the horizontal gutter. No need for calc.
If you want to get rid of the sections at all costs, you can use calc; however, you then end up hacking around for Internet Explorer 8 to have gutters. Not an interesting trade-off in my opinion, but it would make the code lighter and more elegant for sure.
Yes, finally. As I explained at the beginning of the article, the idea consists in simulating the background on the cropped parts with an absolutely positioned pseudo-element spreading a huge box-shadow.
// Part that is being truncated by the circle
// 1. Removing background color
// 2. Making sure the box-shadow from pseudo-element doesn’t leak outside the container
// 3. Enabling position context for pseudo-element
.box__cut {
background: none !important; // 1
overflow: hidden; // 2
position: relative; // 3
// Transparent circle
// 1. Moving it on a lower plan
// 2. Applying a very large box-shadow, using currentColor as color
&:after {
content: '';
position: absolute;
width: $mask-size;
height: $mask-size;
z-index: -1; // 1
border-radius: 50%;
margin: -($mask-size / 2 + $margin);
box-shadow: 0 0 0 55em; // 2
// Hiding it on small screens
@media (max-width: $breakpoint) {
content: none;
}
}
// Positioning transparent circle for left boxes
.box--left &:after {
right: 0;
}
// Positioning transparent circle for right boxes
.box--right &:after {
left: 0;
}
// Positioning transparent circle for top boxes
.box--top &:after {
bottom: 0;
}
// Positioning transparent circle for bottom boxes
.box--bottom &:after {
top: 0;
}
}
Last but not least, we have to apply colors all over our code like some sort of rainbow unicorn on ecstasy. Thankfully, we made a map binding each box to a fancy color from FlatUIColors.
// Applying colors by looping on the color map
@each $key, $value in $colors {
// Targeting the box
.box--#{$key} {
// Applying background colors
.box__header,
.box__footer {
background: $value;
}
// Will be used as the color for box-shadow
.box__cut {
&:after {
color: darken($value, 10%);
}
// Applying background for small screens
// since the pseudo-element will be hidden
@media (max-width: $breakpoint) {
background: darken($value, 10%) !important;
}
// Applying background on browsers not supporting box-shadow/border-radius/pseudo-elements
.no-boxshadow & {
background: darken($value, 10%) !important;
}
}
}
}
We could have used advanced CSS selectors (e.g. :nth-of-type) to avoid having to name boxes; however, that would require either a polyfill for Internet Explorer 8, or another way to select boxes one by one. Not much point in using fancy selectors then.
Some of you used the same trick with borders instead of box-shadows. I think the main pro of using box-shadows is that they don't conflict with the box model, since they are rendered on their own layer. When you're dealing with borders, you have to make sure you include the border in the width/height if you're using box-sizing: border-box. And if you don't… well that's stupid, this property is golden.
However, the major downside of box-shadows is that they can be quite intensive for the CPU/GPU, causing expensive repaints when scrolling, especially on older browsers like Internet Explorer 9.
When it comes to Internet Explorer 8, or actually any browser not supporting one of the 3 major properties involved (pseudo-elements, box-shadow, border-radius; the lowest common denominator happens to be box-shadow), we simply apply an appropriate background color to the .box__cut elements. No circle, no big deal.
Giulia Alfonsi, Lokesh Suthar, One div, mh-nichts and Hugo Darby-Brown made it either with borders or box-shadows. Some of them did use calc
for positioning/sizing although that wasn’t necessary. Good job people.
Rafał Krupiński came up with a solution using radial-gradients. Even better, he used calc in the radial-gradient declarations to keep things fluid. You have to admit that's clever. His solution is probably the one involving the least amount of code, at the price of browser support though. Anyway, congratulations Rafał!
I was hoping for one, Gaël Poupard did it: a solution with clip-path
. Plus his code is fully commented so be sure to have a look at this beauty. Nice one Gaël!
Last but not least, Vithun Kumar Gajendra made an interesting demo animating the pseudo-elements to show the trick. Note he used duplicated background-image on pseudo-elements rather than box-shadows/borders, that’s a cool one too!
Anyway, you can have a look at my fully commented pen here:
See the Pen b8e914a2caf8090a9fffa7cf194afc18 by Kitty Giraudel (@KittyGiraudel) on CodePen.
I was both surprised and pleased to see they are using Sass for their CSS codebase, and more interestingly, they are using it pretty well if I may say so. Their code looked both logical and efficient, so that was kind of a cool pen to look at.
Although, after a couple of minutes digging into their code, I noticed the CSS output wasn't as good as it could be. A couple of minutes later, I submitted a new version to them, taking care of a few optimizations they forgot.
Hence, a short blog post relating all this.
First of all, the way they approach the whole widget is very clever. To deal with half-star ratings, they use left and right borders instead of background-color. This way, they can color only half of the background for a star. This is brilliant.
So the few things I noticed were definitely not about their idea but more about the way they use Sass. The first and most obvious mistake is that they output a rule for a 5.5-stars rating, which simply cannot exist since ratings go from 1 to 5.
.rating-5-half .star-6 {
border-left-color: #dd050b;
}
Next, and probably the biggest flaw in their code: they have a lot of duplicated rules. It's not terrible, but it could definitely be improved. Here is a little section of their output:
.rating-3 .star-1,
.rating-3-half .star-1 {
border-color: #f0991e;
background: #f0991e;
}
.rating-3 .star-2,
.rating-3-half .star-2 {
border-color: #f0991e;
background: #f0991e;
}
.rating-3 .star-3,
.rating-3-half .star-3 {
border-color: #f0991e;
background: #f0991e;
}
This is only for 3-stars ratings, but it’s the same for other ratings as well. We could merge the selectors into one in order to have a single rule with only two declarations in there which would be much better.
Last but not least, their stars-color
function returning a color based on a number (of stars) is repetitive and could be refactored.
@function stars-color($num) {
@if $num == 5 {
@return #dd050b;
} @else if $num == 4 {
@return #f26a2c;
} @else if $num == 3 {
@return #f0991e;
} @else if $num == 2 {
@return #dcb228;
} @else if $num == 1 {
@return #cc8b1f;
}
}
One thing I've been surprised to see is that they use classes instead of data-attributes for their ratings. In my opinion, the only valid reason to do so is having to support Internet Explorer 6, but I'm not sure Yelp does. So I decided to move everything to data-attributes.
<!-- No more -->
<div class="rating rating-1-half"></div>
<!-- Instead -->
<div class="rating" data-rating="1.5"></div>
There are two main reasons for this. The first one is that it allows me to use the ^= attribute selector operator to target both x and x.y ratings with data-rating^='x'. This may seem insignificant, but it makes a selector like .rating-1 .star-1, .rating-1-half .star-1 turn into [data-rating^='1'] .star-1. Much shorter.
Another interesting thing about moving to data-attributes is that it makes any JavaScript enhancement much lighter. Needless to say, it's easier to parse a numeric data-attribute than to parse a string in class lists. But that's clearly out of the scope of this article.
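Though out of scope for the article, a tiny JavaScript sketch makes the difference concrete. Note that the helper names below are mine, purely for illustration, not anything from Yelp's codebase:

```javascript
// Hypothetical helpers comparing both parsing strategies.
// With class-based markup, we need a regex plus some string juggling;
// with data-rating="1.5", a single parseFloat does the job.
function ratingFromClass(className) {
  // e.g. "rating rating-1-half" -> 1.5
  const match = className.match(/rating-(\d)(-half)?/);
  return match ? Number(match[1]) + (match[2] ? 0.5 : 0) : null;
}

function ratingFromData(value) {
  // e.g. element.dataset.rating -> 1.5
  return parseFloat(value);
}
```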
We'll start with the simplest thing we can do to improve the code: refactoring the stars-color function. My idea was to have a list of colors (sorted from the lowest rating to the best one) so we can pick a color from its index in the list.
@function stars-color($stars) {
@if type-of($stars) != number {
@warn '#{$stars} is not a number for `stars-color`.';
@return false;
}
$colors: #cc8b1f #dcb228 #f0991e #f26a2c #dd050b;
@return if($stars <= length($colors), nth($colors, $stars), #333);
}
Here we have a $colors Sass list containing 5 colors, the first being the color for 1 and 1.5 ratings, and the last for 5-stars ratings. The function accepts a single argument: $stars, which is the rating.
Then all we have to do is check whether $stars is a valid index for $colors. If it is, we return the color at index $stars; otherwise we return a default color (here #333). Simple and efficient.
Also note how we make our function safer by making sure $stars is a number. When building custom functions, always think about data validation. ;)
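For readers more at home in JavaScript, the same index-based lookup could be sketched like this. This is a hypothetical analogue for illustration, not part of the actual codebase; the colors are the ones from the Sass list above:

```javascript
// 1-indexed lookup mirroring nth($colors, $stars), with the same
// type validation and the same #333 fallback for out-of-range ratings.
const STAR_COLORS = ['#cc8b1f', '#dcb228', '#f0991e', '#f26a2c', '#dd050b'];

function starsColor(stars) {
  if (typeof stars !== 'number') {
    throw new TypeError(`${stars} is not a number for starsColor`);
  }
  return stars >= 1 && stars <= STAR_COLORS.length
    ? STAR_COLORS[stars - 1]
    : '#333';
}
```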
Yelp Devs are using nested loops to output their CSS. The outer loop goes from 1 through 5 and the inner one is going from 1 to the value of the outer loop. So during the first loop run of the outer loop, the inner loop will go from 1 through… 1. During the second, from 1 through 2, and so on.
Because it does the work well and is quite smart, I kept this as is. However I decided not to output anything in the inner loop and instead use it to build a compound selector to avoid duplicated CSS rules.
@for $i from 1 to 5 {
$color: stars-color($i);
$selector: ();
@for $j from 1 through $i {
$selector: append(
$selector,
unquote("[data-rating^='#{$i}'] .star-#{$j}"),
comma
);
}
#{$selector} {
border-color: $color;
background: $color;
}
[data-rating='#{$i + 0.5}'] .star-#{$i + 1} {
border-left-color: $color;
}
}
This may look a little complicated, but I can assure you it is actually quite simple to understand. First, we retrieve the color for the current loop run and store it in a $color variable to avoid having to fetch it multiple times. We also instantiate an empty list named $selector, which will contain our generated selector.
Then we run the inner loop. As we've seen previously, the inner loop goes from 1 through $i, and it doesn't do much: the only thing going on inside it is appending a piece of selector to the selector list.
Once we get out of the inner loop, we can use the generated selector to dump the rules. For instance, if $i == 2, $selector equals [data-rating^='2'] .star-1, [data-rating^='2'] .star-2. It succeeds in targeting stars 1 and 2 in both 2 and 2.5 ratings.
Last but not least, we need to deal with half-ratings. For this, we only have to dump a selector specifically targeting half ratings, to end up with a result like this: [data-rating='2.5'] .star-3. Not that hard, is it?
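To make the inner loop's behavior concrete, here is a small JavaScript sketch of the same selector-building logic (an illustration of mine, not code from the project):

```javascript
// Builds the comma-separated compound selector that the Sass inner loop
// accumulates with append() for a given full-star rating i.
function fullStarsSelector(i) {
  const parts = [];
  for (let j = 1; j <= i; j++) {
    parts.push(`[data-rating^='${i}'] .star-${j}`);
  }
  return parts.join(', ');
}
```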
You may have noticed from the last code snippet that the outer loop is not dealing with the 5-stars rating, because it goes from 1 to 5 (5 excluded) and not from 1 through 5 (5 included). This is intentional, in order to optimize the CSS output for the 5-stars rating.
There are 2 things that are different in this case: there is no such thing as a 5.5 rating, so an exact [data-rating='5'] selector is enough; and since all five stars are colored, we can target every star (i) at once instead of each .star-x individually. Then dealing with this case is as easy as writing:
$color: stars-color(5);
[data-rating='5'] i {
border-color: $color;
background: $color;
}
To see how efficient those little optimizations have been, I've minified both demos:
And here is what the loops' output looks like in my case:
[data-rating^='1'] .star-1 {
border-color: #cc8b1f;
background: #cc8b1f;
}
[data-rating='1.5'] .star-2 {
border-left-color: #cc8b1f;
}
[data-rating^='2'] .star-1,
[data-rating^='2'] .star-2 {
border-color: #dcb228;
background: #dcb228;
}
[data-rating='2.5'] .star-3 {
border-left-color: #dcb228;
}
[data-rating^='3'] .star-1,
[data-rating^='3'] .star-2,
[data-rating^='3'] .star-3 {
border-color: #f0991e;
background: #f0991e;
}
[data-rating='3.5'] .star-4 {
border-left-color: #f0991e;
}
[data-rating^='4'] .star-1,
[data-rating^='4'] .star-2,
[data-rating^='4'] .star-3,
[data-rating^='4'] .star-4 {
border-color: #f26a2c;
background: #f26a2c;
}
[data-rating='4.5'] .star-5 {
border-left-color: #f26a2c;
}
[data-rating='5'] i {
border-color: #dd050b;
background: #dd050b;
}
Looks quite efficient, doesn’t it?
In the end, it's really not that much; saving 800 bytes is quite ridiculous. However, I think it's interesting to see how we can use features like Sass lists (often overlooked by developers) to improve the CSS output.
Thanks to Sass lists and the append function, we have been able to create a selector from a loop and use this selector outside the loop to minimize the amount of CSS being compiled. This is definitely a fun thing to do, even if it requires rolling up your sleeves and hacking around the code.
Hope you liked the demo anyway folks. Cheers!
See the Pen CSS Rating Stars by Kitty Giraudel (@KittyGiraudel) on CodePen.
Update: be sure to check this version from Mehdi Kabab, using placeholders to make it slightly lighter (14 bytes after gzip… :D).
First of all, this is what you should come up with:
Obviously the difficult part is the transparent circle in the middle of the picture, not adding border-radius to the boxes. Anyway, as you can see we have 4 boxes (2 per row), each with its own color scheme because it's prettier. In the middle of the frame, the four boxes are kind of cropped to make room for some kind of invisible circle. And in this circle there is a dark disk.
Note: this is not an image I made on Photoshop or whatever, this is the result I ended up with.
There are no games without rules, so let me give you some constraints for the exercise, alright?
ul > li > section > header + footer (I came up with a solution to ditch the section element, but it removes IE 8 support, see below). Feel free to add as many classes and attributes as needed, and to use a CSS preprocessor if you feel more comfortable with it. I have no problem with this whatsoever.
Regarding browser support, I came up with a solution working from Internet Explorer 9 gracefully degrading on Internet Explorer 8. As far as I know, you simply can’t do this on IE 8 without images (or SVG or whatever).
That's pretty much it. In a week or so, I'll update the post with my solution and I'll talk about the more creative and effective proposals you gave me. Surprise me people, and be sure to have fun doing it. It's a tricky CSS brain-teaser, I'm sure you're going to love it. ;)
To help you start, I created a very basic CodePen you can fork and link to in the comments.
Game on!
The official syntax for this has yet to be determined, and as of writing there are two proposals grabbing some attention: the :has() pseudo-class (e.g. X:has(Y)), and the ^ operator (e.g. ^X Y); an old proposal also mentions ! instead of ^ but the idea is the same.
I think it should be :has(). Definitely. And here is why.
The first thing to note is how obvious the :has() proposal is. It speaks for itself. One thing I always liked in CSS is the ability to understand the selectors just by reading them out loud. When you see something like this:
A:has(B)
… you only have to read it to understand it: I want to select all A elements containing at least one B element. You can try it with pretty much all CSS selectors, it works pretty well. The exception could be ~ (and > to a lesser extent) which isn't as obvious as it should be.
Anyway, we have a first problem with ^ here: it doesn't make any sense on its own. You have to know it to understand it. This is rather bad in my opinion, but I guess it's not terrible and it can still be a valid candidate for the parent selector.
Moving on.
The “ah-ah moment” I had a while back about CSS was that the target (referred to as subject in the specifications) of a CSS selector is always at the end of it. That's also a reason why CSS parsers read selectors from right to left and not left to right: because this is the way it makes sense.
nav a:hover span
In this example, span is the target. Not nav or a:hover. Just span. This is the element you're willing to style. The remaining pieces of the selector are nothing but context. You may think of it this way: give me the span! Which one? The one in a hovered a, in a nav! Adding a pseudo-class or a pseudo-element to the last element of the selector doesn't change the target, it only adds some more context on the target itself.
nav a:hover span:after
The last element is still the target of the selector, although now it's not only span but span:after. Now back to our discussion; I'm sure you can see the problem coming.
The ^ character (or whatever character it could be) breaks that rule, and this is rather bad in my opinion. When you see ^A B, the target is no longer B, it's A, because of this little character right on its left.
Meanwhile, :has() being a pseudo-class, it preserves this golden rule by keeping the selector's target at the end. In A B:has(C), there are only two dissociable parts: A and B:has(C). And as you can see, the target (B:has(C)) is still at the end of the selector.
Not only is :has() more readable and more understandable, it also goes very well with the existing pseudo-classes, especially :not() and :matches() (aliased as :any() in Firefox), which both work in the exact same way.
Having meaningful pseudo-classes can make a huge difference. There are reasons why we have :not() and not ! as a negation operator: because A:not(B):has(C) is easier to read than ^A!B C.
Actually, the mere fact that :not() already exists as such in the spec is enough to make :has() the only valid choice in this whole discussion.
Also, no selector should start with an operator. You can’t write something like > A
or ~ B
so why should you be able to write something like ^ A B
? On the other hand, starting a selector with a pseudo-class/pseudo-element, while uncommon, is definitely valid (e.g. :hover
).
There are still edge cases I don’t really see handled with the single character notation. For instance, what happens if there are multiple occurrences of the ^
symbol in the selector, like:
A ^B ^C D
What happens here? What is the selector's target? Is it C? Is it D? We don't know, and more accurately: we can't know. According to the specifications, a selector like ^A ^B would result in all B elements contained in A and their containing A elements. Needless to say, it's getting crazy. If you ask me, this should simply throw an error (which, in CSS, is equivalent to skip that shit and move on).
On the other hand, the pseudo-class proposal makes it very easy to allow multiple occurrences of itself in a selector. Even better, chaining and nesting are possible.
A:has(B:has(C))
This means we are looking for all A elements containing at least one B element, itself containing at least one C element. Doing this with the other syntax is close to impossible, and even if we could come up with a solution, would it be as clean as this one?
There are two major pros for the single character proposal. For one, typing ^ or ! is very easy and takes no more than a single keypress, while typing :has() takes 6 keypresses including a mix of letters and special characters. Sounds silly, but that's definitely longer to type.
That being said, I really don't see this as an interesting trade-off. Having consistent and robust selectors is far more important than having to type a couple of extra characters.
If you ask me, the ^ proposal (or ! for that matter) sucks. Syntactically it's very poor and messy. I don't think it should even be considered. The only fair pro I can see is that it's shorter, which is definitely not a good reason to consider it a solid candidate for the parent selector.
Meanwhile, :has()
is robust, simple and very permissive. It’s the One folks.
Update: the ^ combinator is already used in the Shadow DOM, where it is a descendant selector crossing a single shadow boundary. More information on this stuff at HTML5Rocks.
Yesterday, famous French frontend developer Rémi Parmentier proposed a little CSS brain-teaser on his blog and you know how much I like riddles. I am kind of a CSS version of Gollum from The Hobbit - An unexpected journey. Nevermind.
I gave it a go and it turned out to be much easier than I first expected. No weird cross-browser issue, no dirty hack, mostly just plain ol' good CSS. But you may want to give it a try first, don't you?
Let me translate the post from Rémi for you:
.grid > .cell > .item. You can add specific classes if you need. The tricky part is rule 5. After checking the proposals submitted by various developers on Rémi's post, it seems most of them didn't catch that all grey rectangles should be the same width. Here is what you should be having:
Rémi made a CodePen to kickstart the riddle if you’d like to give it a try. Go on, have a shot. I’ll be waiting.
Spoilers! I’ll be giving the solution in a couple of lines or so, so if you don’t feel like knowing or are afraid of what you could read, close the tabs, close the browser, shut down your computer and lock yourself in!
The easiest (and best) solution was to use the calc function. A few people came up with tricky things like absolute positioning, but that doesn't seem like a good idea for a grid system.
When I shared my solution on Twitter, some people seemed kind of amazed at how far I pushed the use of calc()
. In the end I can assure you the solution is very easy to find, hence a little blog post to explain my thought process.
Many devs, including myself, jumped on the code assuming all cells would be the same width, obviously 25% since there are 4 cells per row. That was the first mistake: all cells don't share the same width. Since all orange items are the same width (200px), all grey spans are the same width (unknown), and some cells contain 2 grey spans while some contain only one, all cells can't be the same width. Cells on the sides are narrower than cells in the middle of a row.
Sometimes putting things on paper (well, in a text editor in my case) can help a lot to get things. Here is what I wrote:
orange | grey | margin | margin | grey | orange | grey | margin | margin | grey | orange | grey | margin | margin | grey | orange 200 | ? | 10 | 10 | ? | 200 | ? | 10 | 10 | ? | 200 | ? | 10 | 10 | ? | 200
This is what a row looks like.
(? : the width of a grey span)
). This is actually quite easy to do, isn’t it? What do we know so far?
From this, we can easily pull out the space allotted to grey spans altogether: 100% - (200px * 4 + 10px * 6), or 100% - 860px. To find the width of a single grey span, we only have to divide that by 6, since we have 6 grey rectangles per row. So: (100% - 860px) / 6.
Obviously the computed value depends on the actual width of the viewport. On a 1240px-large screen, it will result in 380px / 6, or 63.333333333333336px. Good!
From there it gets very easy. Side cells have a 200px-wide inner item like every other cell, but they only have one grey span instead of two, since the orange item is stuck to the edge of the grid. So their width is one orange item + one grey span, or 200px + (100% - 860px) / 6.
And the middle cells have two grey spans, so their width is one orange item + two grey spans, or 200px + ((100% - 860px) / 6) * 2.
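To double-check the arithmetic, here is the same computation in plain JavaScript (the function names are mine; a 1240px viewport is assumed, as in the example above):

```javascript
// One row holds 4 orange items of 200px, 6 margins of 10px
// and 6 grey spans sharing whatever width remains.
function greySpanWidth(viewport) {
  return (viewport - (200 * 4 + 10 * 6)) / 6;
}

// Side cells: one orange item + one grey span.
const sideCellWidth = (viewport) => 200 + greySpanWidth(viewport);

// Middle cells: one orange item + two grey spans.
const middleCellWidth = (viewport) => 200 + greySpanWidth(viewport) * 2;
```

On a 1240px viewport this yields a grey span of roughly 63.33px, side cells of roughly 263.33px and middle cells of roughly 326.67px; two side cells, two middle cells and the six margins add back up to the full 1240px.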
Now that we've computed everything on paper, we need to move all this stuff to the stylesheet. Thankfully, CSS provides us the ultimate way to do cross-unit calculations: calc. Even better, calc is supported from IE9, so we only have to draw a quick and dirty fallback for IE8 and we're good to go.
If you are using a templating engine (SPIP, Twig, Liquid…), chances are you generate your rows within a loop. This allows you to dynamically add a class to side cells: basically every multiple of 4, and every multiple of 4 plus 1 (1, 4, 5, 8, 9, 12, 13, 16…).
But since we only have to support a reasonably recent range of browsers, we can use advanced CSS selectors like :nth-of-type() to target side cells as well.
/* Side cells */
.cell:nth-of-type(4n), /* last cells */
.cell:nth-of-type(4n + 1) {
/* first cells */
/* Do something */
}
In the end, the core of the solution is no more than this:
/* Middle cells */
.cell {
width: calc(((100% - (200px * 4 + 10px * 6)) / 6) * 2 + 200px);
}
/* Side cells */
.cell:nth-of-type(4n),
.cell:nth-of-type(4n + 1) {
width: calc(((100% - (200px * 4 + 10px * 6)) / 6) + 200px);
}
You can have a look at the whole code directly on CodePen.
What's interesting when you put things on paper before coding is that you quickly become aware of what would be worth storing in a variable. And if you're using a CSS preprocessor, making this whole grid system work with no more than a couple of variables is within easy reach.
There are 3 things we could store: the width of an item (200px), the size of the margin (10px) and the number of cells per row (4). Once you've set up those 3 variables, you don't have to edit the code anymore whenever you want to change something, could it be the size of the margin or the number of cells per row. Pretty neat.
Note: whenever you're trying to use Sass variables in the calc function, be sure to escape them with #{}. For instance: calc(#{$margin} + 42px).
Again, check code on CodePen.
That's pretty much it, folks. In the end it wasn't that hard, was it? I feel like the most difficult part of such an exercise is to leave the code aside and take a couple of minutes to actually understand what happens.
Too many developers, including myself sometimes, are too hurried to jump into the code and try things. When it comes to grid systems, it turns out that every single time I started coding right away instead of taking a deep breath to get things right, I ended up rewriting most of my code.
And this, as you know it, sucks.
Each color scheme is made up of a few secondary colors that are based on the key color. The secondary colors are generally simple variations on the key color. One is a little lighter, another has less saturation, another a slightly different hue... You get the idea.
Now Sass allows us to use tools like lighten()
and adjust-hue()
to programmatically generate the secondary colors that we need, but often the differences between the key color and the secondary colors are not simple transformations.
This got me thinking! What if we could calculate the mathematical relationship between two colors and use that calculation to generate colors of other themes?
Before we go too far, perhaps it would be a good idea to review how colors actually work in CSS. I've got an older article on my own website that gives a good overview of Colors in CSS. Go on. Have a look! I can wait.
Okay, ready now? So you've probably figured out that colors can be written using an HSL representation. HSL stands for Hue Saturation Lightness, the three main components of a color. According to Wikipedia:
HSL [is one of] the two most common cylindrical-coordinate representations of points in an RGB color model. HSL stands for hue, saturation, and lightness, and is often also called HLS. [T]he angle around the central vertical axis corresponds to "hue", the distance from the axis corresponds to "saturation", and the distance along the axis corresponds to "lightness", "value" or "brightness".
Hue is the base color that the color is derived from (red, green, blue...). Hue is defined based on the color wheel (given in degrees). Saturation defines if your color is bright or dull (given as a percentage). And lightness defines if your color is dark or light (also given as a percentage).
To figure out the color operations required to go from one color to another, we need to determine the individual components of the two colors. Thankfully we don't have to manually figure this out because Sass already provides functions to do just this: hue($color)
, saturation($color)
and lightness($color)
. These functions allow us to extract the individual components of a color.
To calculate the difference between two colors, we need to determine the differences between the individual components of each color:
$hue: hue($color-a) - hue($color-b);
$saturation: saturation($color-a) - saturation($color-b);
$lightness: lightness($color-a) - lightness($color-b);
As you can see, it is very easy to derive the differences between two colors in Sass. Now, with these differences in hand, we need to determine which functions are required to compute $color-b from $color-a.
// Hue is easy, adjust-hue takes negative and positive params:
$function-hue: 'adjust-hue';
// If saturation diff is positive then desaturate, otherwise saturate
$function-saturation: if($saturation > 0, 'desaturate', 'saturate');
// If lightness diff is positive then darken, otherwise lighten
$function-lightness: if($lightness > 0, 'darken', 'lighten');
To wrap up our color-diff() function, we'll return a map of functions and value params. Maps are a new Sass 3.3 feature similar to a Hash in Ruby or an Object in JavaScript. They allow us to store keys and values:
@function color-diff($color-a, $color-b) {
$hue: hue($color-a) - hue($color-b);
$saturation: saturation($color-a) - saturation($color-b);
$lightness: lightness($color-a) - lightness($color-b);
$function-hue: 'adjust-hue';
$function-saturation: if($saturation > 0, 'desaturate', 'saturate');
$function-lightness: if($lightness > 0, 'darken', 'lighten');
@return (
#{$function-hue}: -($hue),
#{$function-saturation}: abs($saturation),
#{$function-lightness}: abs($lightness)
);
}
If this looks a little odd to you, we are using Sass interpolation to return something that looks like this:
$map: (
'adjust-hue': -42deg,
'saturate': 13.37%,
'darken': 4.2%,
);
The keys are function names and values are the diff results. So the result of the color-diff()
function is a map of the operations to apply to $color-a
in order to get $color-b
. Now let's make sure it works as expected.
Checking whether our function works is actually quite simple: we only have to apply those operations to $color-a and see if we get $color-b back. Of course we are not going to do it manually; that would be time-consuming and error-prone. Let's make an apply-color-diff() function to alter a color with the diff returned from color-diff().
@function apply-color-diff($color, $diff) {
@each $key, $value in $diff {
$color: call($key, $color, $value);
}
@return $color;
}
So here's how apply-color-diff() works: it loops over the diff map and, for each entry, calls the function named $key with two arguments, $color and $value, feeding the result back into $color.
The Sass 3.3 call($function, $param-1, $param-2...)
function makes this all possible. Call takes the name of a function in the form of a string and parameters to pass to the function. Here we are using it with our new color diff map to apply the functions in the map to the values.
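If it helps to see the mechanism outside of Sass, here is a hypothetical JavaScript transcription of the diff/apply pair, operating on plain {h, s, l} objects instead of Sass colors (the function and key names mirror the Sass version, but everything else is mine for illustration):

```javascript
// Computes the list of operations turning color a into color b, like
// color-diff(): hue gets a signed adjustment, saturation and lightness
// get a named direction plus an absolute amount.
function colorDiff(a, b) {
  const h = a.h - b.h;
  const s = a.s - b.s;
  const l = a.l - b.l;
  return [
    ['adjust-hue', -h],
    [s > 0 ? 'desaturate' : 'saturate', Math.abs(s)],
    [l > 0 ? 'darken' : 'lighten', Math.abs(l)],
  ];
}

// Equivalent of Sass's call(): look each operation up by name and apply it.
const operations = {
  'adjust-hue': (c, v) => ({ ...c, h: c.h + v }),
  saturate: (c, v) => ({ ...c, s: c.s + v }),
  desaturate: (c, v) => ({ ...c, s: c.s - v }),
  lighten: (c, v) => ({ ...c, l: c.l + v }),
  darken: (c, v) => ({ ...c, l: c.l - v }),
};

function applyColorDiff(color, diff) {
  return diff.reduce((c, [name, value]) => operations[name](c, value), color);
}
```

Running applyColorDiff(a, colorDiff(a, b)) lands exactly on b's components, which is the round-trip property the next section verifies in Sass.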
Nothing better than a little example to make sure everything's right. Consider $color-a: #BADA55 and $color-b: #B0BCA7. First, we run the color-diff() function to get the diff.
$color-a: #bada55;
$color-b: #b0bca7;
$diff: color-diff($color-a, $color-b);
// (adjust-hue: 19.84962deg, desaturate: 50.70282%, lighten: 10.19608%)
Now we run apply-color-diff on $color-a with $diff, to check that $color-b == apply-color-diff($color-a, color-diff($color-a, $color-b)).
$c: apply-color-diff($color-a, $diff);
// #B0BCA7
Victory! It works like a charm.
Now getting back to my original use case. I wanted to see if there was a way to consistently calculate the secondary colors for each theme with one calculation.
Using the color-diff()
function I can now see if there is a consistent mathematical relationship between the primary and secondary colors in each theme.
Using the function I get the following results:
$shopping: color-diff(#41cce4, #4f8daa);
// (adjust-hue: 10.28652deg, desaturate: 38.56902%, darken: 8.62745%)
$associations: color-diff(#ffa12c, #fb6e04);
// (adjust-hue: -7.52115deg, desaturate: 3.13725%, darken: 8.62745%)
$news: color-diff(#937ee1, #ad69ec);
// (adjust-hue: 18.41777deg, saturate: 15.25064%, darken: 1.96078%)
$ads: color-diff(#b1d360, #88a267);
// (adjust-hue: 8.70155deg, desaturate: 32.56861%, darken: 8.23529%)
Darn it! Since each color diff produces different results, I can't actually use this method on my project. There is no way to generate the precise secondary colors used in our design using this approach.
Even though I couldn't use the color-diff()
function in my project, I still found the whole exercise quite valuable. After all, I got a great blog post out of this! It's also been interesting to study how you can morph one color into another one.
What do you think of all this? Have you found interesting ways to morph and use color in your own projects?
I hope you've enjoyed this experiment! If you'd like to play with the code in this project, check out this CodePen. Cheers!
See the Pen Programmatically find one color from another one by Kitty Giraudel (@KittyGiraudel) on CodePen.
On a side note, Brandon Mathis has also worked on Color Hacker, a Compass extension providing some advanced color functions for dissecting your own color schemes.
Something as simple as changing a stringified number into an integer is actually quite difficult to do in Sass, yet sometimes you might find yourself needing to do just that (which probably means there is something wrong somewhere in your code, by the way).
Sass provides a few data types: strings, numbers, colors, booleans (true or false), nulls, lists and, as of Sass 3.3, maps. Let's see how we can cast a value from one data type to another.
Update: I just released SassyCast, also available as an eponymous Compass extension.
Casting to a string has to be the easiest type of all thanks to the brand new inspect
function from Sass 3.3 which does exactly that: casting to string.
@function to-string($value) {
@return inspect($value);
}
It works with anything, even lists and maps. However, it does some color conversions (hsl colors being converted to rgb, and things like that), so if it's important for you that the result of to-string is precisely the same as the input, you might want to opt for a proper quoting function instead. Same if you are running Sass 3.2, which doesn't support inspect.
Another way to cast to string without quoting is to add an unquoted empty string to the value, like this: $value + unquote(""). However, it has two pitfalls:
- it doesn't work with null: it throws Invalid null operation: "null plus """;
- it doesn't work with maps: (a: 1, b: 2) isn't a valid CSS value.
I have already written about how to convert a stringified number into an actual number, even if it has a CSS unit, in this article.
I feel like the function could be improved to accept a boolean to be converted into 0
or 1
and things like that but that’s mostly optimization at this point.
Converting a value to a boolean is both simple and tricky. On the whole, the operation is quite easy because Sass does most of the work by evaluating a value to a boolean in an @if/@else if directive. Meanwhile, there are some values that Sass considers true while they are generally referred to as falsy.
@function to-bool($value) {
  @return not (not $value or $value == '' or $value == 0 or $value == ());
}
Note how we have to manually check for "", () and 0, because these all evaluate to true in Sass.
to-bool(0) // false
to-bool(false) // false
to-bool(null) // false
to-bool("") // false
to-bool(()) // false
to-bool(1) // true
to-bool(true) // true
to-bool("abc") // true
to-bool(0 1 2) // true
to-bool((a: 1, b: 2)) // true
We needed to be able to convert a stringified color into a real color for SassyJSON, and we succeeded in doing so without too much trouble. Since we can't build a hexadecimal color from the # symbol (because it would result in a string), we went with the rgb() function for hexadecimal colors.
Basically, we parse the triplet, convert each of its three parts from hexadecimal to decimal, and run them through the rgb() function to get a color. Not very short, but it does the trick!
I’ll let you have a look at the files from our repo if you’re interested in casting a string to a color.
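If you are more comfortable in JavaScript, the triplet parsing described above can be sketched like this (a hypothetical helper, not SassyJSON's actual code):

```javascript
// Hypothetical sketch: parse a "#rrggbb" string into its decimal channels,
// mirroring the hex-triplet-to-rgb() approach described above.
function hexToRgb(hex) {
  const triplet = hex.replace(/^#/, '')
  // Slice the triplet into three 2-character parts, then convert each
  // part from hexadecimal to decimal.
  const channels = [0, 2, 4].map((i) => parseInt(triplet.slice(i, i + 2), 16))
  return `rgb(${channels.join(', ')})`
}

console.log(hexToRgb('#bada55')) // "rgb(186, 218, 85)"
```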
Technically, Sass treats all values as single-item lists so in a way, your value is already a list even if it doesn’t have an explicit list
type. Indeed, you can test its length with length
, add new values to it with append
and so on. That being said, if you still want to have a list
data type anyway there is a very simple way in Sass 3.3 to do so:
@function to-list($value) {
@return if(type-of($value) != list, ($value,), $value);
}
No, there is no typo in this code snippet. It’s really returning ($value,)
, which is basically a singleton. Starting from Sass 3.3, both lists and maps accept trailing commas and since it’s not the braces but the delimiter which makes a list, returning $value,
returns a list anyway.
If you are running Sass 3.2 and still want to create a singleton, there is a way which is actually kind of clever if you ask me:
@function to-list($args...) {
@return append((), $args);
}
Converting a single value to a map doesn't make much sense, since a map is a key/value pair while a value is, well, a value. So in order to cast a value to a map, we have to invent a key to associate the value with. For the sake of simplicity, we can go with 1, but is it obvious? We could also use the unique-id() function or something. Anyway, here is the main picture:
@function to-map($value) {
@return if(type-of($value) != map, (1: $value), $value);
}
Feel free to replace 1
with whatever makes you feel happy.
to-map("string") // (1: "string")
to-map(1337) // (1: 1337)
Well, I don’t think there is such a thing as casting to null. In JavaScript, typeof null
returns an object (…) but in Sass there is a null
type which has a single value bound to it: null
. So casting to null is the same as returning null
. Pointless.
While we can find hacks and tricks to convert values from one type to another, I’d advise against doing so. By doing this, you are moving too much logic inside your stylesheet. More importantly, there is no good reason to cast a value in most cases.
In any case, I think it’s interesting to know how we can do such things. By tinkering around the syntax, we get to know it better and get more comfortable when it comes to do simple things.
While the idea is solid, the realization is very simple. There was no CSS magic behind it at all. Les James (the author) manually wrote some JSON in the content property of body's ::before pseudo-element, like this:
body::before {
display: none;
content: '{ "current": "small", "all": ["small"] }';
}
Well, you have to admit it is actually kind of cool to be able to do so, right? This is neat! Well, fasten your belts people, because Fabrice Weinberg and I pushed this to another level.
Fabrice and I recently released SassyJSON, a Sass-powered API to communicate with JavaScript through JSON. Basically it’s json-decode
and json-encode
in Sass.
Why, you ask? Well, I guess that could be useful at some point. With maps coming up in Sass 3.3, we are about to have real structured data in Sass. It will soon be very easy to have a config object (understand a map) or a media-query handler (a map again). Being able to encode those objects to JSON and move them out of the stylesheet opens us to a lot of new horizons. I’ll leave you the only judge of what you’ll do with this.
On my side, I already found a use case. You may know Bootcamp, a Jasmine-like testing framework made in Sass for Sass by James Kyle (with a Grunt port). I am using Bootcamp for SassyLists. I am using Bootcamp for SassyMatrix. We are using Bootcamp for SassyJSON. This makes sure our Sass code is clean and efficient.
Back to my point: Bootcamp 2 (work in progress) will use maps to handle test results. Encoding this map to JSON makes it easy to parse it with JavaScript in order to make a real page for tests result, kind of like Jasmine SpecRunner. This is cool. Picture it people:
How awesome is that?
Writing the json-encode
part has been very easy to do. It took us less than an hour to have everything set up. We are able to encode properly any Sass type to JSON, including lists and maps. We have a json-encode
function delaying the encoding to type-specific private functions like _json-encode--string
, _json-encode--list
thanks to the brand new call
function from Sass 3.3:
@function json-encode($value) {
$type: type-of($value); /* 1 */
@if function_exists('_json-encode--#{$type}') {
/* 2 */
@return call('_json-encode--#{$type}', $value); /* 3 */
}
@warn "Unknown type for #{$value} (#{$type})."; /* 4 */
@return false; /* 4 */
}
Here is what's going on:
1. First, we get the type of the value.
2. We check whether a function called _json-encode--#{$type} exists, where #{$type} is the type of the value.
3. If it does, we call it with call, passing it the value as a parameter.
4. Otherwise, we warn about the unknown type and return false.
We are very glad to be able to do clever stuff like this thanks to Sass 3.3's new functions. It looks both neat and clean, doesn't it? Otherwise, all functions are pretty straightforward. Really, writing the encoding part has been easy as pie.
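For readers more at home in JavaScript, here is a hedged analogue of this type-based dispatch (hypothetical encoders, not SassyJSON's actual ones):

```javascript
// Hypothetical sketch of dispatch-by-type: pick an encoder function based
// on the value's type, warn and bail out when the type is unknown.
const encoders = {
  number: (value) => String(value),
  string: (value) => '"' + value + '"',
  boolean: (value) => String(value),
}

function jsonEncode(value) {
  const type = typeof value
  if (!(type in encoders)) {
    console.warn('Unknown type for ' + value + ' (' + type + ').')
    return false
  }
  return encoders[type](value)
}

console.log(jsonEncode('abc')) // prints "abc" (with the quotes)
```

Amusingly, passing null falls through to the warning branch, since typeof null is "object" in JavaScript.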
Once you've encoded your Sass into JSON, you'll want to dump the JSON string into the CSS so that you can access it on the other side. There are several ways to dump a string into CSS without messing things up:
1. in the content property of a pseudo-element (::after or ::before);
2. in the font-family property, preferably on an unused element (e.g. head);
3. in a dedicated media query;
4. in a loud comment (/*! */).
Since we don't like to choose, we picked all of them. We simply made a mixin with a flag as a parameter defining the type of output you'll get: regular for options 1 and 2 (cross-browser mess), media for the media query, comment for the comment, or even all for all of them (which is the default). Judge for yourselves:
$map: (
(a: (1 2 (b : 1)), b: (#444444, false, (a: 1, b: test)), c: (2 3 4 string))
);
@include json-encode($map, $flag: all);
/*! json-encode: '{"a": [1, 2, {"b": 1}], "b": ["#444444", false, {"a": 1, "b": "test"}], "c": [2, 3, 4, "string"]}' */
body::before {
display: none !important;
content: '{"a": [1, 2, {"b": 1}], "b": ["#444444", false, {"a": 1, "b": "test"}], "c": [2, 3, 4, "string"]}';
}
head {
font-family: '{"a": [1, 2, {"b": 1}], "b": ["#444444", false, {"a": 1, "b": "test"}], "c": [2, 3, 4, "string"]}';
}
@media -json-encode {
json {
json: '{"a": [1, 2, {"b": 1}], "b": ["#444444", false, {"a": 1, "b": "test"}], "c": [2, 3, 4, "string"]}';
}
}
Meanwhile, json-decode has been a pain in the ass to write, so much so that I was very close to giving up. Between nested lists, maps, null values, falsy values and hundreds of other tricky cases, it is probably one of the hardest things I've ever done in Sass.
One of the main problems we faced was the ability to retrieve numbers and colors. You see, when you parse a string, everything is a string. Even if you know this part is a number and that part is a boolean, when you slice your string, all you have is shorter strings. Not numbers and booleans.
And this is a big deal, because when you use those tiny bits of decoded JSON in your Sass, types matter. If you write 42px * 2 but 42px is actually a string and not a number as it should be, then your code breaks, Sass is furious and you are sad. Hence this article about casting a string into a number in Sass.
It took me 3 completely different tries before I came up with something that actually succeeds in parsing JSON. Frankly, I was about to give up after the 2nd one because I had absolutely no idea how to do this efficiently. Just in case, I started searching for algorithms, like how to build one's own JSON parser.
I ended up in an obscure StackOverflow thread pointing to JSON parser implementations by browser vendors. Chrome's was impossible for me to understand, so I gave Mozilla's a shot, and it actually looked understandable! Mozilla uses Java for their JSON parser, and their code is quite easy to follow, even for someone with absolutely no experience with Java at all (a.k.a. me).
So I followed the Fox's steps and began implementing it approximately the way they did. Breaking news, folks: Sass and Java are two very different languages. I had to be creative for some things because it was simply impossible to do them their way (number casting, anyone?).
Anyway, the main idea is the following:
1. the user calls json-decode on a JSON string;
2. it delegates the work to _json-decode--value;
3. which dispatches to type-specific functions such as __json-decode--number;
4. the decoded result eventually bubbles back up to json-decode.
As I said, the Fox implemented it as a Java class. Among other things, it means this class can have private properties to keep track of some global values. Well, I don't. At first, I used a couple of global variables, $position (the pointer position), $source (the JSON string) and $length (the length of the string), to keep my code very close to the Java implementation. Indeed, none of my functions required any argument to work, since everything came from the global scope.
This was kind of dirty. I didn't want the parser to rely on global variables, and Fabrice wasn't very satisfied either. So I moved everything back into the functions. This wasn't an easy move, because suddenly I had to pass the pointer from one function to another, from the beginning of the parsing until the very end. And since most functions already return a result, I had to return a list of two elements where the first is the pointer and the second is the actual result: ($pointer, $result). Messy, but it did the trick.
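The pointer-threading trick reads the same in any language. Here is a hedged JavaScript illustration of the idea (hypothetical helpers, not the actual SassyJSON code): each function receives the position and returns it along with the parsed value, so no shared state is needed:

```javascript
// Hypothetical sketch of the ($pointer, $result) idea: each helper takes
// the source and a position, and returns [newPosition, parsedValue].
function parseNumber(source, pos) {
  let result = 0
  while (pos < source.length && source[pos] >= '0' && source[pos] <= '9') {
    result = result * 10 + (source.charCodeAt(pos) - 48)
    pos++
  }
  return [pos, result]
}

function parseArray(source, pos) {
  const values = []
  pos++ // skip the opening '['
  while (source[pos] !== ']') {
    let value
    ;[pos, value] = parseNumber(source, pos) // thread the pointer through
    values.push(value)
    if (source[pos] === ',') pos++
  }
  return [pos + 1, values] // skip the closing ']'
}

console.log(parseArray('[1,22,333]', 0)) // [ 10, [ 1, 22, 333 ] ]
```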
Almost nothing. I am very proud of what we have come up with. The only thing missing from our parser is the ability to detect some special characters: \", \\, \/, \b, \f, \t and \u. We found a way to handle \n, \r and \" but that's pretty much it. I'm not sure we will be able to parse them all, but we need to dig deeper into it before deciding.
Otherwise, I think we are good. We have already done almost 500 simple tests to cover all basic usages of JSON. Now, we are likely to find edge cases like weird encoding, a space at the wrong place and so on…
Also, I'd like to be able to cover every case of invalid JSON with a false return along with an error message in the console. I don't want a compiler error whenever the JSON string is invalid: that's dirty. To find all the error cases, I need tests. And if you feel like helping us test it, you'd be more than welcome.
On the performance side, I suppose we could always do better. We try to make the code as fast as possible, but it's not easy when you nest multiple levels of functions and loops. I am thinking of using some kind of cache system, like Memo for SassyMaps by Snugug. We'll see.
That’s pretty much it folks. We hope you like it! It’s been one hell of a thing to do and we’re glad to have made it through. Comments and suggestions are obviously welcome!
If you want to test SassyJSON, you’ll be pleased to know it’s available on npm or as Ruby Gem. We also asked SassMeister to support it so you should soon be able to play with it directly on SassMeister.
I have to say I am pretty proud of what I have come up with. Not only does it work, but it is also very simple and, from what I can tell, quite efficient. It may be a bit slower for very large numbers, but even then I'm not sure the difference in compilation time would be noticeable. It also lacks support for scientific notation like e, but that's no big deal for now.
As I said, the function is actually simple. It relies on parsing the string character by character in order to map each one to an actual number. Once you have numbers, well, you can do pretty much anything. Let's start with the skeleton, shall we?
@function number($string) {
// Matrices
$strings: '0' '1' '2' '3' '4' '5' '6' '7' '8' '9';
$numbers: 0 1 2 3 4 5 6 7 8 9;
// Result
$result: 0;
// Looping through all characters
@for $i from 1 through str-length($string) {
// Do magic
}
@return $result;
}
I think you can see where this is going. Now let’s have a look at what happens inside the loop:
@for $i from 1 through str-length($string) {
$character: str-slice($string, $i, $i);
$index: index($strings, $character);
@if not $index {
@warn "Unknown character `#{$character}`.";
@return false;
}
$number: nth($numbers, $index);
$result: $result * 10 + $number;
}
And this is enough to cast any positive integer from a string. But wait! What about negative integers? Plus I told you number
, not integer
. Let’s continue the journey!
Dealing with negative numbers is very easy: if we spot a dash (-
) as a first character, then it’s a negative number. Thus, all we have to do is to multiply $result
by -1
(as soon as $result
isn’t 0
).
@function number($string) {
  // …
  $result: 0;
  $minus: false;
  @for $i from 1 through str-length($string) {
    // …
    @if $character == '-' {
      $minus: true;
    }
    @else {
      // …
      $result: $result * 10 + $number;
    }
  }
  @return if($minus, $result * -1, $result);
}
As I said, it is pretty straight forward.
Making sure we can convert floats and doubles took me a couple of minutes. I couldn't find a way to deal with numbers once the decimal dot had been found. I always ended up with a completely wrong result, until I found a tricky way.
@function number($string) {
// …
$result: 0;
$divider: 0;
@for $i from 1 through str-length($string) {
// …
@if $character == '-' {
// …
} @else if $character == '.' {
$divider: 1;
} @else {
// …
// Decimal dot hasn’t been found yet
@if $divider == 0 {
$result: $result * 10;
}
// Decimal dot has been found
@else {
// Move the decimal dot to the left
$divider: $divider * 10;
$number: $number / $divider;
}
$result: $result + $number;
}
}
@return if($minus, $result * -1, $result);
}
Since it can be a little tricky to understand, let's walk through a quick example. Here is what happens when we try to cast "13.37" to a number:
1. We initialize the $divider and $result variables to 0.
2. "1" gets found: $divider is 0, so $result gets multiplied by 10 (still 0), then 1 gets added to $result (now 1).
3. "3" gets found: $divider is 0, so $result gets multiplied by 10 (now 10), then 3 gets added to $result (now 13).
4. "." gets found: $divider is now set to 1.
5. "3" gets found: $divider is greater than 0, so it gets multiplied by 10 (now 10); 3 gets divided by $divider (now 0.3) and 0.3 gets added to $result (now 13.3).
6. "7" gets found: $divider is greater than 0, so it gets multiplied by 10 (now 100); 7 gets divided by $divider (now 0.07) and 0.07 gets added to $result (now 13.37).
All we have left is retrieving the correct unit from the string and returning the length. At first I thought it would be hard to do, but it turned out to be very easy. I moved this to a second function to keep things clean, but you could probably merge both functions.
First, we need to get the unit as a string. It's basically the substring starting from the first non-numeric character. In "42px", it would be "px". We only need to slightly tweak our function to get this.
@function number($string) {
// …
@for $i from 1 through str-length($string) {
// …
@if $char == '-' {
// …
} @else if $char == '.' {
// …
} @else {
@if not $index {
$result: if($minus, $result * -1, $result);
@return _length($result, str-slice($string, $i));
}
// …
}
}
// …
}
If we come across a character that is neither -, nor ., nor a number, it means we are moving onto the unit. Then we can return the result of the _length function.
@function _length($number, $unit) {
$strings: 'px' 'cm' 'mm' '%' 'ch' 'pica' 'in' 'em' 'rem' 'pt' 'pc' 'ex' 'vw'
'vh' 'vmin' 'vmax';
$units: 1px 1cm 1mm 1% 1ch 1pica 1in 1em 1rem 1pt 1pc 1ex 1vw 1vh 1vmin 1vmax;
$index: index($strings, $unit);
@if not $index {
@warn "Unknown unit `#{$unit}`.";
@return false;
}
@return $number * nth($units, $index);
}
The idea is the same as for the number function. We look the string up in the $strings list in order to map it to an actual CSS length from the $units list, then we return the product of $number and that length. If the unit doesn't exist, we simply return false.
If you want to play with the code or the function, you can check it on SassMeister. In any case, here are a couple of examples of our awesome little function:
sass {
cast: number('-15'); // -15
cast: number('-1'); // -1
cast: number('-.5'); // -.5
cast: number('-0'); // 0
cast: number('0'); // 0
cast: number('.10'); // 0.1
cast: number('1'); // 1
cast: number('1.5'); // 1.5
cast: number('10.'); // 10
cast: number('12.380'); // 12.38
cast: number('42'); // 42
cast: number('1337'); // 1337
cast: number('-10px'); // -10px
cast: number('20em'); // 20em
cast: number('30ch'); // 30ch
cast: number('1fail'); // Error
cast: number('string'); // Error
}
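To double-check these results, the same algorithm can be ported to JavaScript (a hedged sketch of the logic described above, not part of the Sass code):

```javascript
// Hypothetical JavaScript port of the number() function described above:
// digits build the integer part, a divider handles decimals, a leading
// dash flips the sign, and a trailing unit is kept as a string suffix.
function toNumber(str) {
  const units = ['px', 'cm', 'mm', '%', 'ch', 'pica', 'in', 'em', 'rem',
                 'pt', 'pc', 'ex', 'vw', 'vh', 'vmin', 'vmax']
  let result = 0
  let divider = 0
  let minus = false
  for (let i = 0; i < str.length; i++) {
    const char = str[i]
    if (char === '-') {
      minus = true
    } else if (char === '.') {
      divider = 1
    } else if (char >= '0' && char <= '9') {
      const digit = str.charCodeAt(i) - 48
      if (divider === 0) {
        result = result * 10 + digit // still before the decimal dot
      } else {
        divider *= 10 // move the decimal dot to the left
        result += digit / divider
      }
    } else {
      // First non-numeric character: the rest of the string is the unit.
      const unit = str.slice(i)
      if (!units.includes(unit)) throw new Error("Unknown unit '" + unit + "'.")
      return (minus ? -result : result) + unit
    }
  }
  return minus ? -result : result
}

console.log(toNumber('-10px')) // "-10px"
```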
So people, what do you think? Pretty cool, isn't it? I'd be glad to see what you could be using this for, so if you ever come up with a use case, be sure to share. ;)
Oh, and by the way, if you need to cast a number into a string, nothing is easier than $number + unquote("").
So I thought I'd give it a go. Since I managed to get a decent result in a matter of minutes and I really enjoyed working on this little thing, here is an explanation of the code.
We will need a couple of string functions that are not currently available in Sass but will be in Sass 3.3 (which should be released in January according to this post by Nex3):
- str-length: like length, but for strings;
- str-slice: slices a string from index A to index B;
- str-insert: inserts a string into a string at index A;
- str-index: finds the first occurrence of a string within a string;
- to-lower-case: converts a whole string to lower case.
You can find the Ruby source code for those functions in this file. I don't do any Ruby, but the code is well documented so it's really easy to understand what's going on.
Now, let's build the str-replace function itself, starting with the skeleton:
@function str-replace($string, $old, $new) {
// Doing magic
@return $string;
}
First things first, we need to check whether $string actually contains $old. If it doesn't, there is nothing to replace and we can return the string as it is. For this, we'll use the str-index function, which returns either the index at which the first occurrence of $old starts, or 0 if $old hasn't been found.
@function str-replace($string, $old, $new) {
$index: str-index($string, $old);
@if $index > 0 and $new != $old {
// Doing magic
}
@return $string;
}
Note how we also make sure the $new string is different from the $old one. Obviously, there is nothing to replace if both are the same! Now let's dig into the core of our function. The first thing we need to do is remove the $old string from $string. To do this, we don't have any other choice than recreating a new string, looping through each character of the string and skipping the ones that belong to $old. Because performance matters, we can start looping from $index instead of 1.
$new-string: quote(str-slice($string, 1, $index - 1));
@for $i from $index through str-length($string) {
@if $i < $index or $i >= $index + str-length($old) {
$new-string: $new-string + str-slice($string, $i, $i);
}
}
So we start by initializing the $new-string
with the beginning of the $string
, from the first character to the one right before $index
(the start of $old
). Then we loop through each character in the string, and append them to the new string only if they are not part of the $old
occurrence.
Now that we've removed the old string, we need to insert the new one. Couldn't be easier with the str-insert function.
$new-string: str-insert($new-string, $new, $index);
Done. Now what if there were multiple occurrences of $old
in the string? The easiest way is to go recursive.
@return str-replace($new-string, $old, $new);
The function will run once again. If $old is found again, it will be dealt with just like the first occurrence, and the function will go recursive again until there is no more occurrence of $old in the string. When there is none, we no longer satisfy @if $index > 0, so we just return $string. End of story.
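For what it's worth, the same recursive strategy reads nicely in JavaScript too (a hypothetical sketch using indexOf and slice instead of str-index and str-slice):

```javascript
// Hypothetical sketch of the recursive replacement: replace the first
// occurrence, then recurse until nothing is left to replace.
// Note: like the Sass version, this would loop forever if `replacement`
// contained `old`, hence the guard discussed below.
function strReplace(string, old, replacement) {
  const index = string.indexOf(old)
  if (index === -1 || old === replacement) return string
  const next = string.slice(0, index) + replacement + string.slice(index + old.length)
  return strReplace(next, old, replacement)
}

console.log(strReplace('a-b-c', '-', '_')) // "a_b_c"
```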
When you build such functions, it is always nice to handle edge cases like wrong arguments or things like this. You might know that the function requires a string for each argument to work but the end user might do something weird with it, like trying to replace a string by a number or something.
You usually put those kind of verifications at the top of the function in order to warn the user that something is wrong before doing anything else. Thankfully, Sass provides the @warn
directive that allows you to display a message in the console. Beware, this directive doesn’t prevent the function from running so you might want to couple it with a @return
.
@function str-replace($string, $old, $new) {
@if type-of($string) != string or type-of($old) != string or type-of($new) != string {
@warn "One of the 3 arguments is not a string.";
@return $string;
}
// Doing magic
}
Because of the way we handle this function, we go recursive. That means if you include the $old
string in the $new
string, you can create an infinite loop and make the whole universe collapse. That wouldn’t be pretty; let’s warn the user.
@function str-replace($string, $old, $new) {
@if str-index($new, $old) != 0 {
@warn "The string to be replaced is contained in the new string. Infinite recursion avoided.";
@return $string;
}
// Doing magic
}
The last thing we can do to make our function even better is to provide a way to control case sensitivity. The simplest way to do so is to add another parameter to the function, say a $case-sensitive boolean. Since str-index is case-sensitive by default, we'll default $case-sensitive to true.
What we can do to allow case insensitivity (when $case-sensitive is set to false) is to turn both $old and $string into lower case (or upper case, whatever) before performing the search. To do so, we only have to change the $index assignment:
$index: if(
not $case-sensitive,
str-index(to-lower-case($string), to-lower-case($old)),
str-index($string, $old)
);
This doesn’t change the initial string at all, it just performs a search without being case sensitive. Easy peasy!
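The same idea in JavaScript (a hypothetical helper; note that indexOf returns -1 instead of 0 when nothing is found):

```javascript
// Hypothetical sketch of the case-insensitive lookup: lower-case both
// strings for the search only, leaving the original string untouched.
function indexOfInsensitive(string, old, caseSensitive) {
  return caseSensitive
    ? string.indexOf(old)
    : string.toLowerCase().indexOf(old.toLowerCase())
}

console.log(indexOfInsensitive('Hello World', 'world', false)) // 6
```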
To be perfectly honest with you, I don't have a use case for this yet, but I am sure there will be one. String replacement is kind of a key feature as soon as you start playing with strings, so if you ever come up with the need to replace a string within another string by yet another string, think of me and tell me what the use case was. ;)
So. This post is kind of a "me" post, in that I'll mostly talk about what I've done in 2013 (web-related) and what I'd like to do in 2014. Let's take this as an opportunity for all of us to look back at 2013 and see what we've done, shall we?
After several years of school — the last two being in a work-based study (half part school, half part work) — I got hired at Tootici (Moirans, France) as a frontend developer as a full-time job.
Basically we create a platform aiming at promoting local proximity businesses and I am the one in charge of the frontend rendering. On the menu: HTML templating with Twig, CSS architecture with Sass & Compass, some JavaScript as well as a little bit of PHP on the Symfony 2 framework. Very interesting work in a very cool team so big deal for me. :)
Actually, 2013 is the year I attended a conference for the first time at all. Attending a conference was a great experience; being there as a speaker was even more awesome. Last June, I had the opportunity to talk about Sass at KiwiParty (Strasbourg, France) in front of a very receptive audience.
More than that, I could meet all those cool folks from Twitter for real this time, including the whole Alsacreations team led by Raphaël Goetter. Many thanks to them, the event was wonderful. I hope to be able to attend it in next June. :)
After about 3 years doing nothing else than CSS, I finally got interested in JavaScript. It’s about time! Actually I’ve come to a point with CSS where I don’t enjoy it as I used to. Don’t get me wrong: there are still topics I don’t fully understand in CSS (z-index
, flexbox
anyone?) but that doesn’t stop me from doing my job at — dare I say it — a quite honorable level.
Anyway in late 2013 I’ve started playing around JavaScript only to discover it is one hell of a language once you find a way to structure it. I think the reason I’ve been discouraged from JS so far is because it’s not that easy to organize. Things got better after I read this article from Chris Coyier about using the Literal Object pattern (incorrectly named “module pattern”).
Then things got even better once I started understanding what the prototype is, and how to use it. I’ve been practicing a little bit with this so far with Countdown.js and CRUD.js. Comments welcome. ;)
Browserhacks has been my first real project, with a purpose, issues, versions, features and so on. Launched in early 2013 with Tim Pietrusky we are very glad with what we’ve come up with so far (which is even more true since the fresh redesign from Joshua Hibbert).
We have many plans for the future of Browserhacks, especially since Fabrice Weinberg joined the core team when we moved over to a Grunt-based workflow. The only thing missing is time.
But worry not my friend, and stay tuned because we won’t drop this project that soon. We have too much to do to give up now. Oh and of course, if you feel like contributing…
If I were brave I'd say "let's give a talk in English!". The truth is I feel very uncomfortable when it comes to speaking in English. Ironically enough, I have written about 60 articles in English during 2013. One day I'll be confident enough to do it, but for now I think I'll keep going with French talks.
However, I’d like to attend an English conference. My heart tends towards Smashing Conf which looks absolutely awesome and is not that far from where I live.
Actually, getting good enough not to be ashamed to call myself a frontend developer would be a good start. Every time I say I'm a "frontend dev", I feel like it's not entirely true, because I am not a great JavaScript developer. Hopefully this will be fixed soon.
In any case, I hope 2014 will be the year I’ll keep doing what I like to do with such a passion. I think that’s the most important.
What about you people? Was 2013 a good year? What are your plans for 2014? :)
In order to start making clean scripts, and not poorly designed pieces of crappy jQuery dumped in the global object, I have revisited an old countdown script I made a while back with the object literal pattern.
There are like a billion JavaScript scripts for countdowns, timers and clocks. That's like the "hello world!" of JS scripts, so why make another one? Everything has been done already!
Well, for one it was mostly about practicing. Making a timer script is something quite simple yet there is often lot of room for improvements. It turns out to be quite a nice playground to work in.
Secondly, I needed a script able to display a countdown the way I like, and not only hh:mm:ss. I wanted to be able to display a sentence like There are still X days, Y hours and Z minutes left or whatever. And since I didn't know of any script that allowed the use of patterns in a string ({days}, {years}…), I started building one.
It worked pretty well and the code was clean enough that I wasn't ashamed to release it on CodePen in early September. But I wanted to try something else than the literal object pattern.
As good as this pattern can be, it becomes highly annoying when you have to deal with multiple occurrences of your widget on the same page. For some things, that’s not a problem at all. But you could definitely come with the need to display multiple timers/countdowns on the same page so I needed something moar.
So here comes Object Oriented JavaScript in all its glory!
Well, obviously you need to include the script in your page. But I made it pretty tiny, plus it doesn't have any dependencies! It's under 2Kb minified (which is about ~1.3Kb once gzipped).
<script src="js/countdown.js"></script>
Then using the countdown is as easy as instantiating the Countdown class:
var countdown = new Countdown()
This creates a new instance with all defaults values but you can pass quite a few options:
selector
Default: .timer
The selector you want to inject Countdown into. It should be a valid string for document.querySelector()
.
dateStart
Default: new Date()
(now)
The date to start the countdown to. It should be a valid instance of class Date
dateEnd
Default: new Date(new Date().getTime() + (24 * 60 * 60 * 1000))
(tomorrow)
The date to end the countdown to. It should be a valid instance of class Date
msgBefore
Default: Be ready!
The message to display before reaching dateStart
msgAfter
Default: It’s over, sorry folks!
The message to display once reaching dateEnd
msgPattern
Default: {days} days, {hours} hours, {minutes} minutes and {seconds} seconds left
The message to display during the countdown where values between braces get replaced by actual numeric values. The possible patterns are:
{years}
{months}
{weeks}
{days}
{hours}
{minutes}
{seconds}
onStart
Default: null
The function to run whenever the countdown starts.
onEnd
Default: null
The function to run whenever the countdown stops.
var countdown = new Countdown({
selector: '#timer',
msgBefore: 'Will start at Christmas!',
msgAfter: 'Happy new year folks!',
msgPattern:
'{days} days, {hours} hours and {minutes} minutes before new year!',
dateStart: new Date('2013/12/25 12:00'),
dateEnd: new Date('Jan 1, 2014 12:00'),
onStart: function () {
console.log('Merry Christmas!')
},
onEnd: function () {
console.log('Happy New Year!')
},
})
The script doesn’t use jQuery at all, mostly because there is no need for such a library here. However, if you happen to use jQuery in your project, you’ll be glad to know the Countdown fires custom events on the element you’re binding the countdown to.
As of today, two events are being fired: countdownStart
and countdownEnd
. You can use them as follows:
var countdown = new Countdown({
selector: '.timer',
})
$('.timer').on('countdownStart', function () {
console.log('The countdown has been started.')
})
$('.timer').on('countdownEnd', function () {
console.log('The countdown has reached 0.')
})
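Since these are standard DOM events, you don’t strictly need jQuery to listen for them either. Here is a minimal sketch of the same idea using the plain EventTarget API — a bare EventTarget stands in for the element you would normally grab with document.querySelector('.timer'); only the event names come from the script:

```javascript
// Minimal sketch: listening for the countdown's custom events without jQuery.
const timer = new EventTarget()
const log = []

timer.addEventListener('countdownStart', () => log.push('started'))
timer.addEventListener('countdownEnd', () => log.push('ended'))

// The countdown script would dispatch these on the bound element:
timer.dispatchEvent(new Event('countdownStart'))
timer.dispatchEvent(new Event('countdownEnd'))

console.log(log) // [ 'started', 'ended' ]
```

In a browser, you would call addEventListener on the actual element instead.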
Pretty neat, right?
My brother Loïc helped me push things further by adding a couple of things to the project on GitHub:
Thanks, bro! Anyway, I’m proud to tell this script has passed strict JSHint validations and Jasmine tests! Hurray!
That’s all folks! I hope you like this script and if you find anything worth mentioning, please be sure to shoot in the comments or directly on the GitHub repo.
Oh and if you only want to hack around the code, check this pen:
See the Pen Object-oriented JS Countdown Class by Kitty Giraudel (@KittyGiraudel) on CodePen
However, this is definitely something good to know, so you might want to read on.
I was working on Browserhacks pretty late the other night, and just when I was about to turn everything off and go to bed, I ran the site in Google Chrome to “check that everything’s okay”.
And everything seemed okay until I noticed one deficient hack we added a couple of days earlier, aiming for Chrome 29+ and Opera 16+. My Chrome 31.0.1650.57 didn’t seem targeted so I removed the hack from our database and added a note about it on our GitHub repository. No big deal.
But just to be sure, I launched Firefox (Aurora) to make some tests, and the same phenomenon happened: I noticed a deficient hack. And then another one. And another one. And another one. And again. What the fuck? All 9 of our hacks supposed to target the latest Firefox seemed to be completely pointless against Firefox Aurora. Either this browser had become bulletproof over its last releases, or there was a problem on our side. The latter was the more plausible, unfortunately.
First thing odd: all the JavaScript hacks were fine; only the CSS ones were miserably failing. So I started checking the stylesheet dedicated to the hacks (merged into `main.css` but whatever) and everything seemed good. I double-checked the call, I double-checked the selectors, I double-checked many little things, but no luck. Everything seemed fine.
Whenever you’re getting desperate about a bug, you start doing very unlikely things in hopes of solving your issues. I’m no exception so I started debugging like a blind man.
First thing I tried was removing the very first hack from the test sheet because it has a very weird syntax that I suspected could break things apart:
.selector { (;property: value;); }
.selector { [;property: value;]; }
Pretty weird, right? Anyway that wasn’t the problem. Then I removed a second one that I knew could be an issue at some point: the collection of IE 7- hacks that rely on adding special characters at the beginning of the property:
.selector { !property: value; }
.selector { $property: value; }
.selector { &property: value; }
.selector { *property: value; }
.selector { )property: value; }
.selector { =property: value; }
.selector { %property: value; }
.selector { +property: value; }
.selector { @property: value; }
.selector { ,property: value; }
.selector { .property: value; }
.selector { /property: value; }
.selector { `property: value; }
.selector { [property: value; }
.selector { ]property: value; }
.selector { #property: value; }
.selector { ~property: value; }
.selector { ?property: value; }
.selector { :property: value; }
.selector { |property: value; }
Well… BINGO! No more issues, and all the CSS hacks were working again. Now that I had found the deficient batch, I had to figure out which line could make the whole world explode (well, kind of). Not much to do except remove them one by one to find out this one was guilty:
.selector { [property: value; }
Most CSS parsers are made in a way that if a line is not recognized as valid CSS, it is simply skipped. Mr. Tab Atkins Jr. explains it very well in his article How CSS Handles Errors:
CSS was written from the beginning to be very forgiving of errors. When the browser encounters something in a CSS file that it doesn’t understand, it does a very minimal freak-out, then continues on as soon as it can as if nothing bad had happened.
Thus, CSS is not a language where a missing semi-colon can prevent your site from working. At best (worst?), it will break your layout because the line with the missing semi-colon and the one after it would not be applied. From the same source:
If the browser is trying to parse a declaration and it encounters something it doesn’t understand, it throws away the declaration, then seeks forward until it finds a semicolon that’s not inside of a {}, [], or () block.
This very last quote explains why this line is able to break your entire stylesheet. Basically, you open a bracket you never close. And since the browser has started parsing the opening bracket, it won’t do anything else before finding the closing one, so every rule written after this hack won’t even be processed.
I made some tests with an opening parenthesis and an opening brace as well: same result. If you open either {}, [] or () in a property and don’t think about closing it, it will break the whole stylesheet (actually everything after the hack, not before).
In the end I simply removed .selector { [property: value; }
from our hacks database so that it doesn’t harm anyone again. If you want to play around with this glitch, simply have a look at this pen:
See the Pen The stylesheet breaker line by Kitty Giraudel (@KittyGiraudel) on CodePen
On a side note, Sass, LESS and Stylus will all throw an error when encountering such a thing. In our case, we use Sass for everything but the hacks, for this very reason: some hacks are not process-safe.
Anyway folks, that’s all I got. ;) Make sure you don’t have weird things in your stylesheets!
Like… for real. There is no distinction in Sass between what you’d call a number (e.g. 42
) and what you’d call a length (e.g. 1337px
). In a sense, that makes sense (see what I did there?). You want to be able to do something like this:
$value: 42px;
@if $value > 10 {
// do something
}
You can do this just because lengths are treated as numbers. Otherwise, you would get an error like "42px is not a number for 42px gt 10".
That being said…
42px == 42; // true
I can’t help but grind my teeth when I see that the previous assertion returns true
. Yes, both are some kind of a number, but still… One has a unit and one does not. I don’t think the strict equality operator should return true for such a case.
Sometimes I wish Sass would make a distinction between ==
and ===
. As a reminder, the first one checks whether values are equal, while the latter also makes sure both are of the same type. This is to prevent something like 5 == '5'
from returning true
. When checking with ===
, it should return false
.
Anyway, every time you use ==
in Sass, it actually means ===
. So basically there is no way to check whether two values are equal without checking their type as well.
In most cases, this is really not an issue but I came up with a case where I didn’t want to check the type. Please have a look at the following example:
// Initializing an empty list
$list: ();
// Checking whether the list is true
$check: $list == true; // false, as expected
// Checking whether the list is false
$check: $list == false; // false
While we would expect an empty list to be false
, it turns out it is not. If it’s not false, then it’s true! Right? Seems not. An empty list is neither true nor false because ==
also checks for types. So the previous statement would look like something like this: [list] === [bool]
which is obviously false, no matter what the boolean is.
Okay so it makes sense that the previous example returns false
in both cases! Nevertheless, ()
being evaluated to false
would be quite cool when checking for a valid value to append to a list. Please consider the following code:
$list: (a, b, c);
$value: ();
@if $value {
// Short for `$value == true` which is the same as `$value != false`
$list: append($list, $value);
}
If ()
was treated as a falsy value, the condition wouldn’t match, and no empty list would end up appended to $list as a 4th item. This is how it works in JavaScript:
var array = ['a', 'b', 'c']
var value = []
if (value != false) {
array.push(value)
}
This works because JavaScript makes a difference between !=
and !==
while Sass uses the latter no matter what.
We talked about the empty-list case in this section but there is the exact same problem with an empty string ""
or even the null
value. Anyway, as I said it’s barely an issue, but it has bugged me more than once.
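A quick sketch of those two cases (the reasoning is the same: the types differ, so == can never return true):

```scss
$check-string: "" == false;   // false: comparing [string] to [bool]
$check-null: null == false;   // false: comparing [null] to [bool]
```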
Even after many articles about Sass lists, they keep surprising me with how messed up they are.
As you may know, most single values in Sass are considered one-item-long lists. This is to allow the use of length()
, nth()
, index()
and more. Meanwhile, if you test the type of a single-value list, it won’t return list
but whatever the type is (be it bool
, number
or string
). Quick example:
$value: (1337);
$type: type-of($value); // number
Indeed — as explained in this comment from Chris Eppstein — parens are not what define lists; it’s the delimiter (commas/spaces).
Now what if we append this value to an empty list? Let’s see.
$value: (1337);
$value: append((), $value);
$type: type-of($value); // list
Bazinga! Now that you appended the value to an empty list, the type is a list. To be totally honest with you, I am not entirely sure why this happens. I believe the append()
function returns a list no matter what, so if you append a single value to an empty list, it returns a one-item list. That’s actually the only way I know to cast a single value into a list in Sass. Not that you’re going to need it often, but that’s actually good to know!
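Based on this trick, a tiny cast helper could look like this (the to-list name is mine, not a built-in):

```scss
// Hypothetical helper: cast any single value into an actual list.
// Relies on append() always returning a list.
@function to-list($value) {
  @if type-of($value) == list {
    @return $value;
  }
  @return append((), $value);
}

// type-of(to-list(1337)) -> list
```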
Okay, let’s get this straight: variable scope has always been my pet hate. I don’t know why, but I always get it wrong. I believe variable scoping in Sass is good, but for some reason it doesn’t always work the way I’d want it to. I recall trying to help someone who wanted to do something like this:
// Initialize a variable
$color: tomato;
// Override it in an impossible @media directive
@media (min-width: 10000em), (-webkit-min-device-pixel-ratio: 42) {
$color: lightgreen;
}
// Use it
body {
background: $color; // lightgreen;
}
When I read it now, it seems obvious to me that the assignment in the @media
directive will override the first one. Indeed Sass is compiled to serve CSS, not evaluated on the fly. This means Sass has no idea whether the @media
will ever match and it doesn’t care. It simply overrides the variable; there is no scoping involved here. But that would be cool, right?
Okay, let’s take another example with Sass scope in mixin directives shall we?
// Define a `$size` variable
$size: 1em;
// Define a mixin with an argument named `$size`
@mixin whatever($size: 0.5em) {
// Include the `@content` directive in the mixin core
@content;
margin-bottom: $size * 1.2;
}
// Use the mixin
el {
@include whatever {
font-size: $size;
}
}
I want to play a game. In your opinion, what is the CSS rendered by this code (shamelessly stolen from Mehdi Kabab's new book - “Advanced Sass and Compass”)?
The correct answer is:
el {
font-size: 1em;
margin-bottom: 0.6em;
}
This is actually not fucked up at all: it’s the expected behaviour from correct variable scoping. While it might look silly for an advanced Sass user, I bet it’s not that obvious to the beginner. The declared $size
variable is used for the font-size while the default value for the $size
argument is used for the bottom margin since it is inside the mixin, where the variable is scoped.
Since Sass 3.3, this is no longer a bug. It has been fixed.
You all know what a ternary is, right? Kind of a one-line if
/else
statement. It’s pretty cool when you need to assign a variable differently depending on a condition. In JavaScript, you’d write something like this:
var whatever = condition ? true : false
Where the first part would be an expression evaluating to a truthy or falsy value, and the other two parts can be whatever you want, not necessarily booleans. Okay, so technically there is no ternary operator in Sass (even if there is one in Ruby very similar to the one we just used). However there is a function called if()
which works the same way:
$whatever: if(condition, true, false);
First argument is the condition, second one is the value to return in case the condition is evaluated to true
and, as you may guess, the third one is returned when the condition is false. So far, no surprise.
Let’s have a try, shall we? Consider a function accepting a list as its only argument. It checks its length and returns either the 2nd item if it has multiple items, or the only item if it has just one.
@function f($a) {
@return if(length($a) > 1, nth($a, 2), $a);
}
And this is how to use it:
$c: f(bazinga gloubiboulga);
// returns `gloubiboulga`
And now with a one-item long list:
$c: f(bazinga);
// List index is 2 but list is only 1 item long for `nth'
BAZINGA! The if()
function returns an error. It looks like it’s trying to access the second item in the list, even if the list is only one item long. Why, you ask? Because Sass’s ternary function evaluates both the 2nd and 3rd arguments no matter what.
Fortunately, this issue should be solved in the upcoming Sass 3.3, according to this GitHub issue. Meanwhile, a workaround is to use a real @if/@else
statement to bypass the issue. Not ideal but still better than nothing.
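Applied to the function above, the workaround could look like this (same behavior, minus the crash):

```scss
@function f($a) {
  // A real @if/@else only evaluates the branch that is taken,
  // so nth($a, 2) is never reached for a one-item list.
  @if length($a) > 1 {
    @return nth($a, 2);
  } @else {
    @return $a;
  }
}
```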
I love how powerful Sass has become but there are things that keep boggling my mind. Mehdi Kabab, a fellow French developer (and author of a fresh new book called Advanced Sass and Compass) told me it was because I wasn’t using Sass as a preprocessor.
@KittyGiraudel the main problem is you want use Sass like PHP or Ruby, and not like a CSS preprocessor ;) /cc @kaelig
— Mehdi Kabab, Twitter
That’s actually true! I’ve done many things with Sass that are really beyond the scope of CSS. But that’s where I think the fun is: thinking outside the box, and hacking around the syntax. That’s how I learnt to use Sass, and that’s how I’ll keep going. ;)
But let’s put some context first: Twig presents itself as a template engine for PHP. Kind of like Jekyll, but far more powerful. The basic idea is to create reusable templates, also called “views” (basically HTML blocks), to avoid repeating the same code again and again.
Since not all of you are Twig masters (neither am I though), I am going to explain a couple of things before entering the topic.
Twig is mostly about extending templates (@extend
). Thus we start by setting up a base template outputting some HTML (<html>
, <head>
, <body>
…) and defining Twig blocks. Quick example:
<!-- base.html.twig -->
<!DOCTYPE html>
<html>
<head><!-- whatever --></head>
<body>
{% block header %}{% endblock %}
{% block main %}{% endblock %}
{% block footer %}{% endblock %}
</body>
</html>
When a second template extends the first one, it can dump stuff into those blocks, which will bubble up into the first one to finally output content. There is no maximum level of nesting for such a thing, so you can go as deep as you want. Let’s continue our example:
<!-- page.html.twig -->
{% extends 'base.html.twig' %}
{% block header %}
<h1>Title</h1>
{% endblock %}
{% block main %}
<p>My first page</p>
{% endblock %}
{% block footer %}
<footer>Credits & copyright</footer>
{% endblock %}
That’s pretty much how you work a project with Twig.
Now you can also include files (@include
), which works as you would expect: this is basically the @include
from PHP. So if you have some static content, like a footer for example, you can include a partial (a bunch of HTML if you will) directly into your footer block like this:
{% block footer %}
{% include 'partials/footer.html.twig' %}
{% endblock %}
And finally, you can embed (@embed
) files, which is more complex. Embedding is a mix of both extending and including. Basically it includes a template with the ability to make blocks bubble down instead of up. We’ll come back to this.
The problem I faced at work was finding a way to manage both themes and layouts in Twig with themes being design schemes (mostly color-based) and layouts basically being the number of columns we use for the layout as well as their size.
So the theme is passed as a class to the body element (e.g. <body class="shopping">
), while the layout defines what DOM nodes / HTML classes we will use for the main content of the site.
We have half a dozen themes — one per section of the site — (shopping
, news
, admin
, regular
…) and 4 different layouts based on the 12-columns grid system from Bootstrap (12
for a full-width one-column template, 9-3
for two columns with a 3/1 ratio, 8-4
for two columns with a 2/1 ratio and 2-7-3
for 3-columns).
Back to the issue: we had to be able to define both the theme and the layout on a page per page basis. Something like this:
<!-- This doesn’t work. -->
{% extends '@layout' %}
{% extends '@theme' %}
Unfortunately, it’s not possible to extend multiple templates in Twig (which seems obvious) so we had to find a workaround.
One possible way to go — the one we wanted to avoid at all costs — was having either every layout duplicated for every theme, or every theme for every layout. Basically something like this:
With this solution, you could do something like {% extends 'shopping/12' %}
. Or the other way around:
With this solution, you could do something like {% extends '12/shopping' %}
.
Both suck. Really badly. It is not only very ugly but also a nightmare to maintain. Friends, don’t do this. This is not a good idea. Especially since Twig is such a powerful template engine: there is a better way.
After some research, we finally found a way to do what we wanted with the embed
directive. As I said earlier, embed really comes in handy when trying to achieve complicated systems like this. From the official Twig documentation:
The embed tag combines the behaviour of include and extends. It allows you to include another template’s contents, just like include does. But it also allows you to override any block defined inside the included template, like when extending a template.
In the end, we need 4 files to create a page:
- base.html.twig, which defines the page core and the major blocks
- {theme}.html.twig, with {theme} being the name of the theme we want (e.g. shopping), which extends base.html.twig and defines the class for the body element (and, if necessary, some other theme-specific stuff)
- {layout}.html.twig, with {layout} being the layout we want (e.g. 9-3), defining content blocks
- page.html.twig, which is the actual page, embedding the layout file in the main content to override its blocks

This may sound a bit complicated, so let’s do this step by step, shall we?
As seen previously, the base file creates the HTML root document, the major HTML tags and defines the major Twig blocks, especially the one used to define the HTML class on the body element.
<!DOCTYPE html>
<html>
<head><!-- whatever --></head>
<body class="{% block theme %}default{% endblock %}">
{% block layout %}{% endblock %}
</body>
</html>
Next, we need to define a theme. A theme file directly extends the base file, and will be extended by the page file. The content of the theme file is very light. Let’s say we have a shopping theme; so we have the shopping.html.twig
file:
{% extends 'base.html.twig' %}
{% block theme 'shopping' %}
The last line of this code example may look a little weird to you: it is the short way for {% block theme %}shopping{% endblock %}
. I like this way better when the block content is just a word or two without any HTML.
Anyway, when using this theme, the theme
block defined in base.html.twig
will be filled with shopping
, setting a shopping
class to the body element.
Let’s say our page will use the shopping theme we just created with a 2-column layout with a 3/1 ratio. Right? As I said previously, I like to name my layout files after the way they lay out columns, so in this case: 9-3.html.twig
.
<div class="wrapper">
<div class="col-md-9 content">
{% block content %}{% endblock %}
</div>
<div class="col-md-3 sidebar">
{% block sidebar %}{% endblock %}
</div>
</div>
We only need the last piece of the puzzle: the page file. In this file, there is not much to do except dumping our content into the right blocks:
{% extends 'shopping.html.twig' %}
<!-- Filling the 'layout' block defined in base template -->
{% block layout %}
{% embed '9-3.html.twig' %}
{% block content %}
My awesome content
{% endblock%}
{% block sidebar %}
My sidebar content
{% endblock %}
{% endembed %}
{% endblock %}
Here is the resulting HTML:
<!DOCTYPE html>
<html>
<head><!-- whatever --></head>
<body class="shopping">
<div class="col-md-9 content">
My awesome content
</div>
<div class="col-md-3 sidebar">
My sidebar content
</div>
</body>
</html>
Voila! Pretty neat, right?
That’s pretty much it. From there, dealing with color schemes is quite simple since you have a specific class on the body element. To ease the pain of working out design schemes on the CSS side, I use a couple of Sass mixins and a bunch of Sass variables. It makes everything fit in a couple of lines instead of a large amount of vanilla CSS.
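As an illustration, such a helper could be as simple as this (a hedged sketch: the theme names and colors are made up, and real themes would involve more than a single color):

```scss
// Hypothetical theme list: (name color) pairs, Sass 3.2-friendly.
$themes: (shopping tomato) (news deepskyblue) (admin slategray);

// Emit a themed value for a given property, scoped to the
// body class set by the Twig theme file.
@mixin themed($property) {
  @each $theme in $themes {
    .#{nth($theme, 1)} & {
      #{$property}: nth($theme, 2);
    }
  }
}

.header {
  @include themed(background-color);
  // -> .shopping .header { background-color: tomato; } and so on
}
```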
Long story short: Twig is really powerful and so is the embed directive.
This reminded me that not so long ago, I was a fervent defender of the saying "Bootstrap is good for prototypes and back offices and stuff like this".
Until a recent project where I finally learnt to like Bootstrap, even for websites. But let’s back up a little bit!
I recently got hired for quite a big project as the only frontend developer in a team of a dozen developers. The design itself is fairly complex since it involves various layouts, multiple themes, a lot of forms and a bunch of pages. Thankfully, Symfony 2 and its template engine Twig make it a lot easier to manage, but that’s not the point.
So when I started working on this project, the project manager basically told me I would be the only one dealing with the front end, which sounded great to me because the other developers were mostly backend devs.
Kitty, we’ll use Bootstrap.
— NOOOOOO!
And then he told me what I didn’t want to hear: "we will use Twitter Bootstrap" and I was like "NOOOO!!".
But then he said something even worse: "Bootstrap 2.3" and then I was like "NOOOOOOOO!!" (note the number of O is increasing).
Since Bootstrap 3 was still in RC back then, it wasn’t possible for us to use it. Thankfully a couple of days later, it got officially released so we jumped onto it and moved the little frontend we had already done to v3.
At first, it was a pain in the ass for me to work with Bootstrap. Mostly because I had never used it before. Especially the grid system, which didn’t feel intuitive to me: .container
, .row
, .col-md-*
? What is this?
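For the record, those classes fit together like this (Bootstrap 3 markup; the content is made up):

```html
<div class="container">       <!-- centers and constrains the page -->
  <div class="row">           <!-- one horizontal band of columns -->
    <div class="col-md-8">Main content</div> <!-- spans 8 of 12 columns -->
    <div class="col-md-4">Sidebar</div>      <!-- spans 4 of 12 columns -->
  </div>
</div>
```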
But also because I thought my CSS skills were good enough that I didn’t have to use a framework. And in a way, I was right: I don’t need a CSS framework to make a website. But just because I don’t need it doesn’t mean I shouldn’t use it at all.
We have been working on this project for a couple of weeks now, and picking Bootstrap has to be one of the wisest moves we have made so far.
This is the main reason I like Bootstrap on this project: I can code really fast. Making a component displaying a row of products with their image, title and description takes me no more than a couple of minutes thanks to Bootstrap’s powerful grid system and its collection of components.
Also it provides a lot of helper classes like .pull-left
, .clearfix
and a good starting point for responsiveness.
This heading can be confusing: I am not talking about LESS, the CSS preprocessor. I mean that using Bootstrap really reduces the number of dependencies used across a project.
Carousel? Check. No need for FancyJqueryAnythingCarouselSlider.js. Icon fonts? Check. No need for FontAwesome. Modal? Check. Dropdowns? Tabs? Tooltips? Check, check, check. It may sound trivial, but not having thousands of dependencies is really important to keep things maintainable.
Of course we still have other dependencies than Bootstrap, like jQuery UI (which could deserve a similar article, I guess), underscore.js and quite a few other things, but I can’t imagine the number of external dependencies we would have right now if we were not using Bootstrap.
I believe this whole "Bootstrap is evil" thing started shortly after Twitter Bootstrap 2.x came out. Many people started creating websites with nothing more than the default collection of components, without even trying to customize them or find a unique scheme.
At this point, every site looked alike and it was kind of annoying for sure. But I feel like this time is over, and now most Bootstrap-powered sites are using it wisely, adding their own custom design on top of Bootstrap components. That’s what Bootstrap is: a backbone for the site.
In the end, I think I’ve changed my mind on Bootstrap and I’m really starting to understand what it’s for. On big websites, having a skeleton to work on is important. It’s like managing a huge JavaScript structure without a library (be it jQuery, MooTools, whatever).
Long story short: Bootstrap is not that bad. Just don’t use it raw. Cook it your way first.
The following is a guest post by Christoph Rumpel, a passionate web developer from Austria. I’m very glad to have him writing a motivational post here.
Here and now is really an exciting time to be a web developer or web designer. You can find everything you need to start your career on the web. There are articles, screencasts, podcasts and forum discussions on really every topic you can think of, and this is why you can start being a web designer or developer right now. It is not necessary to have a degree in order to get a job, as is the case in other fields. It is all about what really counts: your work and your personality.
This is one of the thousand reasons why I love my job. What you see is what you get and the only thing you need to start is a computer and Internet access. Pants are not a must! Voila, here you go.
This isn’t something new to you, so where am I going? As I already mentioned this awesome basis we have is a great advantage for starting your career, but at the same time it can be the opposite.
People tend to think that their work is not good, especially when they start learning and building something new. They aren’t satisfied with what they can accomplish. They are impatient, alas for no reason. They avoid building real projects and hide what they do. And these are the two worst mistakes I can think of. Every beginning is hard, but this is the time when we learn the most, and something we have to go through anyway. But who is telling you that your work is not good enough? It is only you!
Design is here and now.
I ran into this problem when I started learning HTML and CSS. Of course I was quickly able to build a simple website with what I had learned, but the result didn’t match my expectations; my wrong expectations. This is why I told myself I needed more time to learn and that I was not good enough yet. Wrong, wrong, wrong! Unfortunately it took me some time to realize the issue. It was one of my design teachers at university who opened my mind when he said: "Start building things with your current skills now. If you can draw rectangles but no circles, draw something only with rectangles. Design is here and now!"
It may sound stupid, but if you ask me, this is one of the most important lessons I have learned. You need to go out and build things right now. This is the best and only way to improve your skills and yourself too. There are projects waiting for you everywhere. There are always some friends who have this great little band and who would love to have a website and shirt designs. Or someone from your family needs a greeting card for a birthday. I swear it is more difficult to save yourself from too much work. But these are just some examples. It is okay if you just work for yourself too, but make sure to make projects as realistic as possible. Schedule them and write down the main goals in order to get the most out of them.
But there is more: please don’t be afraid to show what you have accomplished. There are simple reasons: 1) you should be proud of what you have done. You have created something new and I am damn sure you have learned a lot. That is awesome! 2) this is the easiest way to get feedback. Learn from other people’s opinions.
This isn’t just something you have to consider at the beginning of your career. These things are important to all of us, and we face similar situations every day. I hope I could open your minds and that you will stop being too critical of yourself, because there is no need to be. Mx. Kitty themselves are a great example. They are working a lot and sharing most of their work with us too. This is a great benefit for us and for them. They told me that there are times too, when they think their work is not good enough. I can tell you / them that their work is definitely good enough, and so is yours!
The code explained in this article has been slightly revisited in the pen afterwards. For the ultimate version of the code, check the pen.
You know how much I love playing with Sass lists. I think they are the most powerful and useful feature in Sass. It’s a shame there are so few functions to deal with them. This is why I made SassyLists.
Most importantly, I always wanted a console.log()
for Sass. You know, something to debug a variable, a list, a value, whatever… There is the [@debug](https://sass-lang.com/documentation/file.SASS_REFERENCE.html#_4)
directive but somehow it didn’t completely satisfy me. Plus, there is no console on CodePen, and since this is where I do most of my experiments, I needed something else.
So I rolled up my sleeves, got my hands dirty and made my own Sass debug function. This is what it looks like:
See the Pen Debug Sass lists by Kitty Giraudel (@KittyGiraudel) on CodePen
If you don’t want to read but simply want to dig into the code, check this pen.
Everything started when I wrote a function to stringify a list. At first, the point was to turn a regular Sass list into a JSON-like string in order to be able to output it in a CSS pseudo-element.
It was pretty easy to do.
@function debug($list) {
// We open the bracket
$result: '[ ';
// For each item in list
@each $item in $list {
// We test its length
// If it’s more than one item long
@if length($item) > 1 {
// We deal with a nested list
$result: $result + debug($item);
}
// Else we append the item to $result
@else {
$result: $result + $item;
}
// If we are not dealing with the last item of the list
// We add a comma and a space
@if index($list, $item) != length($list) {
$result: $result + ', ';
}
}
// We close the bracket
// And return the string
$result: $result + ' ]';
@return quote($result);
}
This simple function turns a Sass list into a readable string. It also deals with nested lists. Have a look at the following example:
$list: a, b, c, d e f, g, h i, j;
body:before {
content: debug($list);
// [ a, b, c, [ d, e, f ], g, [ h, i ], j ]
}
Okay, this is pretty neat, right? However, every time I wanted to debug a list, I had to create a body:before
rule, set the content property and all… I wanted something easier.
Basically I wanted to go @include debug($list)
and have everything displayed. Perfect use case for a mixin, right?
@mixin debug($list) {
  body:before {
    content: debug($list) !important;
    display: block !important;
    margin: 1em !important;
    padding: 0.5em !important;
    background: #efefef !important;
    border: 1px solid #ddd !important;
    border-radius: 0.2em !important;
    color: #333 !important;
    font: 0.75em/1.5 'Courier New', monospace !important;
    text-shadow: 0 1px white !important;
    white-space: pre-wrap !important;
  }
}
In case you wonder, I slap !important everywhere in case body:before is already defined for something else. Basically I force this pseudo-element to behave exactly how I want.
So. This mixin doesn’t do much more than style the output of the debug function. Now, instead of having to open the body:before rule, set the content property and all, we just need to write @include debug($list).
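For instance, debugging a hypothetical list of breakpoints (the variable name and values are just for illustration) becomes a one-liner:

```scss
$breakpoints: small 480px, medium 768px, large 1024px;

// Renders the stringified list in the body:before pseudo-element,
// something like: [ [ small, 480px ], [ medium, 768px ], [ large, 1024px ] ]
@include debug($breakpoints);
```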
Pretty neat, but I wanted moar.
I wanted two things: 1) explode the list into several lines to make it easier to read; 2) add the ability to display the type of each value in the list.
You might have stumbled upon my article Math sequences with Sass, in which I explain how I created famous math sequences in Sass and how I managed to display them with nothing more than CSS. In it, I kind of answered the question of line breaks in CSS.
If you’ve ever read the CSS specification for the content property (don’t worry, neither have I), you may know that there is a way to insert line breaks with \A (don’t forget the trailing white space). In that article, I used it as a $glue for the to-string() function from SassyLists.
This is pretty much what we will do here.
@function debug($list) {
  $line-break: '\A ';
  $result: '[ ' + $line-break;
  @each $item in $list {
    $result: $result + ' ';
    @if length($item) > 1 {
      $result: $result + debug($item);
    } @else {
      $result: $result + $item;
    }
    @if index($list, $item) != length($list) {
      $result: $result + ', ' + $line-break;
    }
  }
  $result: $result + $line-break + ']';
  @return quote($result);
}
All we did was add a line break after the opening bracket, after each value, and before the closing bracket. That looks great, but we need to handle the indentation now. This is where it gets a little tricky.
Actually, the only way I could manage a perfect indentation is with the same trick I used for the to-string() function: an internal boolean to make a distinction between the root level (the one you called) and the inner levels (from nested lists). The problem with this boolean is that it messes with the function signature, but that’s the only way I found.
@function debug($list, $root: true) {
  $line-break: '\A ';
  $result: '[ ' + $line-break;
  $space: if($root, '', ' ');
  @each $item in $list {
    $result: $result + ' ';
    @if length($item) > 1 {
      $result: $result + debug($item, false);
    } @else {
      $result: $result + $space + $item;
    }
    @if index($list, $item) != length($list) {
      $result: $result + ', ' + $line-break;
    }
  }
  $result: $result + $line-break + $space + ']';
  @return quote($result);
}
The list should now be properly indented, and so should the nested lists. Okaaaay, this is getting quite cool! We can now output a list in a clean var_dump() way.
Now, the icing on the cake would be displaying variable types, right? Thanks to the type-of() function and some tweaks to our debug function, it is actually quite simple to do. Far simpler than what we previously did with indents and line breaks.
@function debug($list, $type: false, $root: true) {
  $line-break: "\A ";
  $result: if($type,
    "(list:#{length($list)})[ " + $line-break,
    "[ " + $line-break
  );
  $space: if($root, "", " ");
  @each $item in $list {
    $result: $result + " ";
    @if length($item) > 1 {
      $result: $result + debug($item, $type, false);
    }
    @else {
      $result: if($type,
        $result + $space + "(" + type-of($item) + ") " + $item,
        $result + $space + $item
      );
    }
    @if index($list, $item) != length($list) {
      $result: $result + ", " + $line-break;
    }
  }
  $result: $result + $line-break + $space + "]";
  @return quote($result);
}
As you can see, it is pretty much the same. We only check for the $type boolean and add the value types wherever they belong. We’re almost there!
Note: I’ve set the $type boolean to false as a default for the debug function, but to true for the mixin.
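The mixin itself only needs a tiny update to forward that boolean; here is a sketch of what it could look like (the styles are the same as in the first version, so they are elided):

```scss
@mixin debug($list, $type: true) {
  body:before {
    // Pass $type along so the mixin shows value types by default
    content: debug($list, $type) !important;
    // … same styling declarations as before
  }
}
```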
The only problem left is that if you debug a single value, it will wrap it into (list:1) [ … ]. While this is technically true, it doesn’t really help the user, so we should get rid of it. Fairly easy! We just have to add a condition when entering the function.
@function debug($list, $type: false, $root: true) {
  @if length($list) == 1 {
    @return if($type,
      quote("(#{type-of($list)}) #{$list}"),
      quote($list)
    );
  }
  …
}
That’s pretty much it, people. I hope you like it. This has been added to SassyLists, so if you think of something to improve it, be sure to share!
Some of you might find this kind of overkill. If so, you can try this @debug-powered version by Mehdi Kabab that does pretty much the same thing, but in the Ruby console.
I wrote this article months ago when I was first experimenting with Sass 3.3 alpha features. I came up with a pretty wild solution to generate a random number in Sass. However, it looks like Sass 3.3 will implement a random function, so we won’t need all this stuff. I still publish it for fun. :)
Everything started when I was spying on the Sass 3.3 source code on GitHub for my article about the future of Sass on David Walsh’s blog. I was sniffing the incoming functions when all of a sudden I came across a unique-id() function.
According to the issue which started this idea, the unique-id() function should return a unique random alphanumeric identifier that could be used for whatever you like. As far as I understood the example provided by Chris Eppstein, it could be used to dynamically generate and extend a placeholder from within a mixin. Kind of complicated stuff, really.
Anyway, I saw this unique id thingie as an opportunity to have a random number in Sass. Why? I don’t know. I leave that question to you. Maybe some day I’ll find a use case for a random number in CSS.
Note: the code in this article has not been tested at all since it requires some Sass 3.3 functions that are not implemented yet. This is more like a proof of concept.
unique-id()
To understand what this is all about, you need to know what the unique-id() function is and what it returns. First of all, there are two different functions for this in the Sass source code, both from 2 months ago: one in tree f3be0f40b7 (using base 36) and one in branch unique_id (using base 16). I only worked on the latter since it’s most likely the one that will be implemented.
I’m not a Ruby pro, but with the help of a kind folk on Twitter, I could make it work on CodePad. Here is what a couple of runs of the function look like:
u84ec5b4cdecd4299
u871ec9c6e6049323
u8865b8a8e572e4e8
u85f6c40bb775eff2
u8868f6a1f716d29f
u89cf1fa575a7a765
u89184d7511933cd3
u8a7287c699a82902
u8547f4133644af4c
u86fb16af4800d46b
So the function returns a 17-character-long alphanumeric string. As you may have noticed, the returned string always starts with a u. This is actually hard-coded inside the function core to make sure the string always starts with a letter, so it can be used as a class / placeholder / id, whatever.
To put it very simply, the function generates a random 19-digit number, converts it to base 16 (or base 36 in the other implementation), then prepends a u to it. So when we use unique-id(), we end up with something like this: u8547f4133644af4c.
My first attempt to get a random number from this string was to remove all alpha characters from it, then keep only the number of digits we want (or as many as we still have). To do this, I used the incoming string manipulation functions (str-length(), str-slice(), str-insert()):
@function rand($digits: 16) {
  /* Array of characters to remove */
  $letters: a b c d e f u;
  $result: unquote('');
  $string: unique-id();
  /* For each character in the given string */
  @for $i from 1 through str-length($string) {
    /* Isolate character */
    $character: str-slice($string, $i, $i);
    /* If not a letter */
    @if not index($letters, $character) {
      /* Append it to $result */
      $result: str-insert($result, $character, str-length($result) + 1);
    }
  }
  /* Deal with the number of digits asked */
  @if $digits != 0 and $digits < str-length($result) {
    $result: str-slice($result, 1, $digits);
  }
  /* Return the result */
  @return $result;
}
I think the code is pretty much self-explanatory. I check each character individually: if it’s not a letter, I append it to the $result variable. When I’m done, if the length of $result is still greater than the number of digits asked for ($digits), I truncate it.
And there we have a random number between 1 and 9999999999999999 (in case all 16 characters are 9s).
$number: rand(); /* Random between 1 and 9999999999999999 */
$number: rand(1); /* Random between 1 and 9 */
$number: rand(4); /* Random between 1 and 9999 */
$number: rand(0); /* Random between 1 and 9999999999999999 */
$number: rand(-1); /* Random between 1 and 9999999999999999 */
Okay, let’s say it: the first version I came up with is really dirty. That’s why I reworked a new version from scratch with the help of my brother. We even tweaked it to make it future-proof for both implementations of the unique-id() function. How cool is that?
To put it simply, instead of stripping alpha characters, we take the alphanumeric string and convert it back into an integer. That gives us a fully random integer which we then only have to constrain between the min and max values.
@function rand($min: 0, $max: 100) {
  $str: str-slice(unique-id(), 2);
  $res: toInt($str, 16);
  @return ($res % ($max - $min)) + $min;
}
The first line in the function core is the unique-id() function call. We immediately pass its result to the str-slice() function to remove the very first character, which is always a u.
Note: according to my tests, the min value used in both implementations of unique-id() is such that the second character of the returned string is always the same (8 in base 16, 1 in base 36). Thus we may need to strip it too, like this: str-slice(unique-id(), 3).
The second line calls a toInt() function, passing it both the string ($str) and the base we want to convert the string from (not to). This is why I say we’re ready for both implementations: we only have to change this 16 to a 36 and everything should work like a charm.
Before going to the last line, let’s have a look at the toInt() function:
@function toInt($str, $base: 10) {
  $res: 0;
  $chars: charsFromBase($base);
  @if $chars != false {
    $str: if($base < 64, to-lower-case($str), $str);
    @for $i from 1 through str-length($str) {
      $char: str-slice($str, $i, $i);
      $charVal: index($chars, $char) - 1;
      $res: $res + pow($base, str-length($str) - $i) * $charVal;
    }
    @return $res;
  }
  @return false;
}
$res will store the result we will return once we’re done. $chars contains the array of characters used by base $base; we’ll see the charsFromBase() function right after. Then, if the base is supported, we loop through each character of the string.
For every character, we isolate it ($char) and convert it to its numeric equivalent ($charVal) thanks to the $chars array. Then, we multiply this number by the base raised to the reversed index of the character in the string. That may sound a little complicated, so let me rephrase it: in base 10, 426 equals 4*10^2 + 2*10^1 + 6*10^0. That’s pretty much what we do here, except instead of 10 we use the base, and instead of 2, 1 and 0, we use the length of the string minus the index of the current character.
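To make the math concrete, here are a couple of hypothetical calls (remember none of this could actually run until the Sass 3.3 string functions landed):

```scss
$a: toInt('426', 10); // 4*10^2 + 2*10^1 + 6*10^0 -> 426
$b: toInt('ff', 16);  // 15*16^1 + 15*16^0 -> 255
$c: toInt('101', 2);  // 1*2^2 + 0*2^1 + 1*2^0 -> 5
```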
The pow() function used to raise a value to an exponent is part of the Compass math helpers. In case you don’t want to use Compass or simply can’t, here is a pow() function in pure Sass:
@function pow($val, $pow) {
  $res: 1;
  @while $pow > 0 {
    $res: $res * $val;
    $pow: $pow - 1;
  }
  @return $res;
}
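Just to illustrate, this pure Sass version only handles positive integer exponents, which is all toInt() needs here:

```scss
$a: pow(10, 2); // -> 100
$b: pow(16, 3); // -> 4096
$c: pow(2, 0);  // -> 1
```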
And of course, we add this to the result ($res). Once we’re done with the string, we return the result to the rand() function, which then simply returns ($res % ($max - $min)) + $min, resulting in a random number between the min and max values.
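As a worked example with made-up numbers:

```scss
// Say toInt() returned 36684 and we called rand(10, 20):
// 36684 % (20 - 10) = 4
// 4 + 10 = 14, a number in the expected [10, 20) range
```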
Regarding the charsFromBase()
function, here is what it looks like:
@function charsFromBase($base: 10) {
  /* Binary */
  @if $base == 2 {
    @return 0 1;
  }
  /* Octal */
  @if $base == 8 {
    @return 0 1 2 3 4 5 6 7;
  }
  /* Decimal */
  @if $base == 10 {
    @return 0 1 2 3 4 5 6 7 8 9;
  }
  /* Hexadecimal */
  @if $base == 16 {
    @return 0 1 2 3 4 5 6 7 8 9 a b c d e f;
  }
  /* Base 36 */
  @if $base == 36 {
    @return 0 1 2 3 4 5 6 7 8 9 a b c d e f g h i j k l m n o p q r s t u v w x y z;
  }
  /* Base 64 */
  @if $base == 64 {
    @return A B C D E F G H I J K L M N O P Q R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z 0 1 2 3 4 5 6 7 8 9 + /;
  }
  @return false;
}
I only added the most common standard bases (binary, octal, decimal, hexadecimal, 36, 64), but I guess we could add a couple of others. Actually, this is already too much since we know the unique-id() function will return a base-16 or base-36 encoded string (depending on the implementation they’ll keep).
That’s pretty much it. As I said at the beginning of the article, I couldn’t try this code since neither the unique-id()
nor the string manipulation functions are currently implemented in the Sass 3.3 Alpha version. So this is pretty much blind coding here!
If you think of anything that could improve this Sass random function, please be sure to tell. Meanwhile you can play with the code directly on this pen.
Even if there is no practical application for such things, those were definitely fun Sass experiments and people seemed to be interested on Twitter so here is the how-to.
If you're not interested in learning how I did it and just want to see the code, you can play around with these pens: Fibonacci number, Juggler sequence, Look-and-say sequence.
The Fibonacci number is one of those math sequences that follow simple rules. The one ruling the Fibonacci sequence is that each new entry is the sum of the previous two. Here are the first entries of this sequence:
0 1 1 2 3 5 8 13 21 34 55
Pretty simple, isn't it? Of course there is no end to this sequence, so we need to set a limit: the number of entries we want; we'll call this number $n. Okay, let's build the skeleton. To start the sequence we need 2 numbers, right?
@function fibonacci($n) {
  $fib: 0 1;
  @for $i from 1 through $n {
    $fib: append($fib, $new);
  }
  @return $fib;
}
We're almost done! We only need to work out this $new variable. It's actually really simple:
$last: nth($fib, length($fib));
$second-to-last: nth($fib, length($fib) - 1);
$new: $last + $second-to-last;
And there you have it: the Fibonacci number in Sass. Here is the whole function and a use case:
@function fibonacci($n) {
  $fib: 0 1;
  @for $i from 1 through $n {
    $new: nth($fib, length($fib)) + nth($fib, length($fib) - 1);
    $fib: append($fib, $new);
  }
  @return $fib;
}
$fib: fibonacci(10);
// -> 0 1 1 2 3 5 8 13 21 34 55 89
I'll be totally honest with you guys: I'm not sure what the Juggler sequence is meant for. All I know is how it works. First of all, it is not an infinite sequence; secondly, it's different for each initial number.
Basically, every new entry in the sequence is the previous one raised to the power 1/2 if it's even, or to the power 3/2 if it's odd, rounded down. Let's take an example with 3 as a starter:
3 // initial
5 // 3^3/2 = 5.196...
11 // 5^3/2 = 11.180...
36 // 11^3/2 = 36.482...
6 // 36^1/2 = 6
2 // 6^1/2 = 2.449...
1 // 2^1/2 = 1.414...
What's interesting about this sequence is that it will eventually always end up at 1. This is actually pretty cool because it means we know when to stop: when we reach 1. Ready?
First time ever I use a while loop. So proud! \o/
@function juggler($n) {
  $juggler: ($n);
  @while nth($juggler, length($juggler)) != 1 {
    // What's $new?
    $juggler: append($juggler, $new);
  }
  @return $juggler;
}
Anyway, I think the code is pretty self-explanatory. We append new values to the list until the last one is 1, in which case we stop. All we have to do is find $new.
It is actually pretty simple. We only have to check whether the last number is odd or even, and round the result down since the sequence deals with integers:
$last: nth($juggler, length($juggler));
$x: if($last % 2 == 0, 1/2, 3/2);
$new: floor(pow($last, $x));
Simple, isn't it? Here is the whole function and a use case:
@function juggler($n) {
  $juggler: ($n);
  @while nth($juggler, length($juggler)) != 1 {
    $last: nth($juggler, length($juggler));
    $x: if($last % 2 == 0, 1/2, 3/2);
    $new: floor(pow($last, $x));
    $juggler: append($juggler, $new);
  }
  @return $juggler;
}
$juggler: juggler(77);
// -> 77 675 17537 2322378 1523 59436 243 3787 233046 482 21 96 9 27 140 11 36 6 2 1
The Look-and-say sequence is a little less mathematical than the Fibonacci number. Its name is self-explanatory: to generate a new entry from the previous one, you read off the digits of the previous entry, counting the number of digits in groups of the same digit.
$look-and-say: 1, 11, 21, 1211, 111221, 312211;
Starting with 1, here is what happens: 1 is read "one 1", which gives 11; 11 is read "two 1s", which gives 21; 21 is read "one 2, one 1", which gives 1211; and so on.
In case you're interested, there are a number of fun facts regarding this sequence:
- Odd entries end with 21 and even entries end with 11.
- The frequency of 1 is 50%, of 2 is 31%, and of 3 is 19%.

You can even start the sequence with a digit other than 1. For any digit from 0 to 9, this digit will indefinitely remain the last digit of each entry:
d 1d 111d 311d 13211d 111312211d 31131122211d
To build this sequence with Sass, I got inspired by an old pen of mine where I attempted to do the sequence in JavaScript. The code is dirty as hell and definitely waaaay too heavy for such a thing, but it works.
Since Sass isn't as powerful as JavaScript (no regular expressions, no replace...), I don't think there are many ways to go. If anyone has a better idea, I'd be glad to hear it! :)
As with the Fibonacci number, there is no end to this sequence, so we have to define a limit. Again, this will be $n.
@function look-and-say($n) {
  $sequence: (1);
  @for $i from 1 through $n {
    // We do stuff
  }
  @return $sequence;
}
Before going any further, I think it's important to understand how we are going to store the whole sequence in Sass. Basically, it will be a list of lists, like this:
$sequence: 1, 1 1, 2 1, 1 2 1 1, 1 1 1 2 2 1;
So the upper level (entries) is comma-separated while the lower level (numbers within each entry) is space-separated. A two-level-deep list. Alright, back to our stuff.
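Since this structure matters for what follows, here is how nth() navigates such a two-level list (the variable names are just for illustration):

```scss
$sequence: 1, 1 1, 2 1, 1 2 1 1;

$entry: nth($sequence, 4); // -> 1 2 1 1 (the 4th entry)
$digit: nth($entry, 2);    // -> 2 (the 2nd number of that entry)
```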
For each loop run, we first look at the previous entry, then build the new entry from it, number by number. Let's see:
@function look-and-say($n) {
  $sequence: (1);
  @for $i from 1 through $n {
    $last-entry: nth($sequence, length($sequence));
    $new-entry: ();
    $count: 0;
    @for $j from length($last-entry) * -1 through -1 {
      $j: abs($j);
      $last: nth($last-entry, $j);
      $last-1: null;
      $last-2: null;
      @if $j > 1 {
        $last-1: nth($last-entry, $j - 1);
      }
      @if $j > 2 {
        $last-2: nth($last-entry, $j - 2);
      }
      // We do stuff
    }
  }
  @return $sequence;
}
We use the dirty old negative hack to make the loop decrement instead of increment, since we want to start from the last character (stored in $last).
Since the second-to-last and third-to-last characters don't necessarily exist, we first define them to null, then we check whether they can exist, and if they can, we define them for good.
Now we check whether $count == 0. If it does, it means we are dealing with a brand new character. Then, we need to know how long the run of identical numbers is (1, 2 or 3). Quite easy to do:
- if $last, $last-1 and $last-2 are identical, it's 3;
- else if $last and $last-1 are identical, it's 2;
- otherwise, it's 1.

Once we've figured out this number, we can prepend it (remember we're starting from the end of the line) along with the value to the new entry.
Then, we decrement $count by 1 at each loop run. This is meant to skip the numbers we just checked.
@if $count == 0 {
  @if $last == $last-1 and $last == $last-2 {
    $count: 3;
  }
  @else if $last == $last-1 {
    $count: 2;
  }
  @else {
    $count: 1;
  }
  // Prepend new numbers to the new entry
  $new-entry: join($count $last, $new-entry);
}
$count: $count - 1;
Once we're done with the inner loop, we can append the new entry to the sequence and start a new entry again, and so on until we've completed $n loop runs. When we're finished, we return the sequence. Here is the whole function:
@function look-and-say($n) {
  $sequence: (1);
  @for $i from 1 through $n {
    $last-entry: nth($sequence, length($sequence));
    $new-entry: ();
    $count: 0;
    @for $j from length($last-entry) * -1 through -1 {
      $j: abs($j);
      $last: nth($last-entry, $j);
      $last-1: null;
      $last-2: null;
      @if $j > 1 {
        $last-1: nth($last-entry, $j - 1);
      }
      @if $j > 2 {
        $last-2: nth($last-entry, $j - 2);
      }
      @if $count == 0 {
        @if $last == $last-1 and $last == $last-2 {
          $count: 3;
        } @else if $last == $last-1 {
          $count: 2;
        } @else {
          $count: 1;
        }
        // Prepend new numbers to the new entry
        $new-entry: join($count $last, $new-entry);
      }
      $count: $count - 1;
    }
    // Append the new entry to the result
    $sequence: append($sequence, $new-entry);
  }
  // Return the whole sequence
  @return $sequence;
}
And here is how you use it:
$look-and-say: look-and-say(7);
// -> 1, 1 1, 2 1, 1 2 1 1, 1 1 1 2 2 1, 3 1 2 2 1 1, 1 3 1 1 2 2 2 1, 1 1 1 3 2 1 3 2 1 1
Caution! This sequence is pretty heavy to generate, and the number of characters in each entry grows quickly. On CodePen, it gets too heavy after about 15 iterations. You could push it further locally, but if your browser crashes, don't say you weren't warned!
One equally interesting thing is how I managed to display these sequences with line breaks and reasonable styles without any markup at all.
First things first: to display textual content without any markup, I used a pseudo-element on the body. This way, I can inject text into the document without having to use an extra element.
Now, to display it with line breaks, I had to get tricky! The main idea is to convert the list into a string, joining the elements with a line-break character.
Thankfully, I recently wrote an article about advanced Sass list functions, and one of those functions is to-string().
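For reference, here is a minimal sketch of what such a to-string() function can look like (the real SassyLists implementation handles more cases, like nested lists):

```scss
@function to-string($list, $glue: '') {
  $result: '';
  @for $i from 1 through length($list) {
    // Concatenate each item, inserting the glue between items only
    $result: $result + nth($list, $i) + if($i != length($list), $glue, '');
  }
  @return $result;
}
```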
I think you can see where this is going now: to display the Fibonacci number line by line, I simply did this:
body:before {
  content: quote(to-string(fibonacci(100), ' \A '));
  white-space: pre-wrap;
}
Here is what we do (from middle to edges): we generate the sequence, convert it to a string using the \A line-break character as glue, quote the result, and set it as the content of the pseudo-element, with white-space: pre-wrap to preserve the line breaks.
There you have it: displaying a whole list of data with line breaks, all through CSS. Pretty neat, isn't it?
Note: for the Look-and-say sequence, it takes one extra step to convert the nested lists into strings first. You can check how I did it directly on the pen.
This is pointless but definitely fun to do. And interesting. Now what else could we do? Do you have anything in mind? :)
In case you have missed my first article about this topic, I recommend you read Advanced Sass list functions.
Hey people, it’s been a while since I last posted anything! I have been pretty busy lately, but I really miss writing, so here it is: a short article about what’s new in my Sass list functions library.
Well, first of all, it has been added as a Team-Sass repository on GitHub (the pen is still updated). You probably know the Team-Sass collective; they have done a ton of awesome things like Breakpoint, Sassy Math and UIKit.
I am very glad to see my repo in there, so big thanks to them. :)
Even bigger news! It is now a Compass extension so you don’t have to copy/paste functions into your projects anymore. All you have to do is:
1. Install the gem: gem install SassyLists
2. Require it in your config.rb file: require 'SassyLists'
3. Import it in your stylesheet: @import 'SassyLists';
Done. From there you can use all the functions you want. Isn’t it awesome? Plus, all you have to do to update the library is reinstall the gem with the same command as in step 1. No more checking whether your functions are up to date and copy/pasting all over again.
All of this thanks to Vinay Raghu who made the Compass extension out of my original work. A million thanks to him!
I have added a couple of functions to make the library even more awesome, like purge(), is-symmetrical(), sum(), chunk(), count-values() and remove-duplicates().
I can’t believe I didn’t make the purge() function a while ago. Basically, it removes all non-true values from a list. Compass includes the compact() function which does pretty much the same thing.
@function purge($list) {
  $result: ();
  @each $item in $list {
    @if $item != null and $item != false and $item != '' {
      $result: append($result, $item);
    }
  }
  @return $result;
}
$list: a, b, null, c, false, '', d;
$purge: purge($list);
// -> a, b, c, d
I think the code is self-explanatory. We loop through all items of the list: if an item is not false-ish, we append it to a new list, which we then return. Easy peasy! It would be even easier if Sass had a boolean converter operator (!!). Then we could do something like @if !!$item { $result: append($result, $item); }. Unfortunately, we can’t.
I don’t think this function has any major use case, but you know, just in case, I added it. It checks whether your list is symmetrical (palindromic). It’s based on my reverse() function.
@function is-symmetrical($list) {
  @return reverse($list) == reverse(reverse($list));
}
Why don’t we compare the initial list with the reversed one? Because reversing a list modifies its inner structure, which would result in a false assertion. Reversing both operands makes sure the two lists are properly compared.
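A quick illustration (assuming the reverse() function from the library is available):

```scss
$a: is-symmetrical(a b c b a); // -> true
$b: is-symmetrical(a b c);     // -> false
```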
Same here; I don’t think it has much point, but I wanted to add it anyway. It takes all unitless numbers from the list and adds them together. The second parameter is a boolean enabling / disabling the removal of units: when it’s true, values get parseInt’d so only the numbers remain.
@function sum($list, $force: false) {
  $result: 0;
  @each $item in $list {
    @if type-of($item) == number {
      @if $force and unit($item) {
        $item: $item / ($item * 0 + 1);
      }
      @if unitless($item) {
        $result: $result + $item;
      }
    }
  }
  @return $result;
}
$list: 1 2 3 4px;
$sum: sum($list); // -> 6
$sum: sum($list, true); // -> 10
The chunk() function is based on the equivalent PHP function array_chunk(). From the PHP.net manual:
Chunks an $array into $size large chunks. The last chunk may contain less than $size elements.
@function chunk($list, $size) {
  $result: ();
  $n: ceil(length($list) / $size);
  $temp-index: 0;
  @for $i from 1 through $n {
    $temp-list: ();
    @for $j from 1 + $temp-index through $size + $temp-index {
      @if $j <= length($list) {
        $temp-list: append($temp-list, nth($list, $j));
      }
    }
    $result: append($result, $temp-list);
    $temp-index: $temp-index + $size;
  }
  @return $result;
}
$list: a, b, c, d, e, f, g;
$chunk: chunk($list, 3);
// -> ( (a, b, c), (d, e, f), g)
We could probably make the code slightly lighter, but I didn’t want to dig too deep into this. I’ll eventually clean it up later. Meanwhile, it works great. If you find a use case, hit me up!
Same as above, the count-values() function is inspired by array_count_values(), which counts each value of the given array:
Returns an array using the values of $array as keys and their frequency in $array as values.
@function count-values($list) {
  $keys: ();
  $counts: ();
  @each $item in $list {
    $index: index($keys, $item);
    @if not $index {
      $keys: append($keys, $item);
      $counts: append($counts, 1);
    } @else {
      $count: nth($counts, $index) + 1;
      $counts: replace-nth($counts, $index, $count);
    }
  }
  @return zip($keys, $counts);
}
It’s based on the built-in zip() function, which merges several lists into a multi-dimensional list while preserving indexes.
$list: a, b, c, a, d, b, a, e;
$count-values: count-values($list);
// -> a 3, b 2, c 1, d 1, e 1
There are times when you want to remove values that are present multiple times in a list. You used to have to do it by hand. Not anymore; I’ve got your back.
@function remove-duplicates($list, $recursive: false) {
  $result: ();
  @each $item in $list {
    @if not index($result, $item) {
      @if length($item) > 1 and $recursive {
        $result: append($result, remove-duplicates($item, $recursive));
      } @else {
        $result: append($result, $item);
      }
    }
  }
  @return $result;
}
$list: a, b, a, c, b, a, d, e;
$remove-duplicates: remove-duplicates($list);
// -> a, b, c, d, e
You can even do it recursively if you feel like it, by enabling recursion with true as a second argument. Nice, isn’t it?
Last but not least, I added a debug() function to help you people debug your lists. Basically, all it does is display the content of your list, like a console.log() in JavaScript.
@function debug($list) {
  $result: #{'[ '};
  @each $item in $list {
    @if length($item) > 1 {
      $result: $result#{debug($item)};
    } @else {
      $result: $result#{$item};
    }
    @if index($list, $item) != length($list) {
      $result: $result#{', '};
    }
  }
  $result: $result#{' ]'};
  @return $result;
}
$list: (a b (c d (e f ((g h (i j k)) l m))));
$debug: debug($list);
// -> [ a, b, [ c, d, [ e, f, [ [ g, h, [ i, j, k] ], l, m ] ] ] ]
Not only do I try to add new functions, I also do my best to make all functions as fast as they can be, and the library as simple to understand as possible, so you can dig into it to change / learn stuff.
For example, you know we have two remove functions: remove() and remove-nth(). I have simplified those two greatly:
@function remove($list, $value, $recursive: false) {
  @return replace($list, $value, '', $recursive);
}

@function remove-nth($list, $index) {
  @return replace-nth($list, $index, '');
}
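Usage stays the same as before; for instance, with a hypothetical list (and assuming the replace() functions behave as described below, purging empty strings):

```scss
$list: a, b, c, b;

$a: remove($list, b);     // -> a, c (every b is replaced by '' then purged)
$b: remove-nth($list, 1); // -> b, c, b
```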
Crazy simple, right? How come I haven’t done this earlier? Well, let’s be honest, it has been a pain in the ass to get here. I faced an annoying issue: replacing a value with an empty string didn’t remove the element from the list, it simply made it invisible. The difference is that the length of the list remained unchanged, and this is a big deal.
This is why I had to create the purge() function. Both the replace() and replace-nth() functions now return a purged list, which means the empty strings actually get deleted from the list.
I have also used quite a couple of ternary operators along the way to make the code lighter.
Quite a few things! I still have to clean up some functions because they are kind of messy at the moment. And I could still add new functions, if you think of something.
I can’t wait for Sass 3.3, it is going to be awesome. First, the if() function will be completely reworked to have a built-in parser so it stops bugging around.
But there will also be new string manipulation functions (str-length(), str-slice()…) and the call() function, which will allow me to make a lot of new functions like every().
Oh, and of course Sass 3.3 will bring map support, which will be a whole other story, with a ton of new functions to make. Anyway, it is going to be amazing, really!
Anyway, since we do not have a blog for Browserhacks, I have no choice but to announce all those things here. Here is a quick article to explain all we’ve done since the last major update.
We have decided to put aside our PHP tools and move to a Grunt workflow. As you may know, Grunt is a JavaScript task runner, and this move means a lot of things for us.
Well, obviously, the first thing is that we need to learn how to Grunt. Fabrice Weinberg has helped us with the porting (a million thanks to him), but at the end of the day we should be able to do this on our own.
Now that we don’t use PHP anymore, we can host the whole thing on GitHub Pages, which keeps our repository always synchronized with the server and saves us from all that server/hosting crap.
Ultimately, because Grunt is a task runner, we will be able to do a lot of things we couldn’t imagine doing with a PHP setup. More importantly, we will be able to do a lot more things automatically, especially testing hacks and stuff.
I think this is one of the biggest changes we’ve made to the site so far: merging the home page and the test page. See, from the very beginning we had a separate test page. First it was all static, then I managed to generate it dynamically from our database.
This was a huge step forward, but did we really need a separate page just for testing? It looks like no. It involved quite a bit of work, but I’m glad we made it. What do you people think?
Nothing changed in the way we test hacks though: if your browser recognizes a line of code, it turns it into a lovely green. If you don’t like seeing green lines everywhere on the home page, you can still disable the tests by unchecking the Enable tests checkbox at the top of the page. Or you could download a browser that doesn’t spread green lines everywhere… :)
There are still a couple of hacks that are not tested at all, essentially the hacks using IE-specific HTML comments. There is a simple reason for that: we do not know how to test them efficiently for now. We’ll think of something.
I think the very first issue we’ve opened for Browserhacks was a request for a copy-to-clipboard feature in order to have a hack ready to be used in a single click. Unfortunately, accessing the user’s clipboard is very difficult due to obvious security reasons.
This article by Brooknovak explains it in detail, but basically here are the possible solutions to insert content into the clipboard:
- clipboardData: only available in IE
- ZeroClipboard: relies on Flash
- Liveconnect: relies on Java
- XUL: only available in Mozilla, and kind of buggy
- execCommand: both hacky and buggy
Basically it’s a mess, and a cross-browser copy-to-clipboard is not realistic. So we had to think of something, and by we I mean Tim Pietrusky of course. He came up with a clever idea which allows the user to select a hack (for lack of copying it) in one click.
Thus, he released a little JavaScript library called _select() that allows anything to be selected in a single click: paragraphs, images, whole documents, anything.
Anyway, we now use this cool little library to allow you to select a whole hack by simply clicking it. Then, you only have to press ctrl
/cmd
+ C
. Hopefully, this will make it easier to use for all of you with a trackpad.
The web is evolving very quickly and so are browsers. Meanwhile, we are trying to keep a well-documented list of hacks, including hacks nobody will ever use because they target dinosaur browsers. To make the list lighter, we’ve set up a legacy system.
Basically all hacks targeting a browser we consider as a legacy browser won’t be displayed unless you tick the checkbox Show legacy
at the top of the page, in which case you see everything, even those shits for IE 6.
Fortunately, we’ve made it very easy for us to decree a browser version as obsolete. All we have to do is change the version in this file. Every hack for this version and prior will be considered as legacy.
Soon enough, we’ll move the legacy limit for Internet Explorer to 7
. Soon enough my friends.
We thought it would be cool if you could link to a specific hack. It would make it easier to show a hack to someone, rather than copy/pasting or saying Section IE, sub-section Media hacks, 3rd hack on the 2nd column.
So every hack now has a unique ID. You can target a hack by clicking the little #
at the bottom right of the code.
This is a feature request by Lea Verou we’re honoring. She asked us for a way to know whether a hack is valid or not. By valid, we mean it goes through CSS Lint without raising a warning.
Thanks to both Fabrice and Grunt, we managed to have all our CSS hacks checked with CSS Lint so you can know right away if a hack is valid or not. We’ll very soon have the same thing for JavaScript hacks with JSLint.
Awesome little feature: in case the hack is invalid, we display the warning raised by CSS Lint when you hover the little cross at the bottom right of the hack. Pretty cool, right?
We’ve also done a few little things, starting with improving the design. The header is now lighter, and only the search bar is fixed on scroll. We’d like your opinion on this. You like it? You don’t? Why?
In addition we added, fixed and removed a lot of hacks.
Well, there is always work to do: be it fixing bugs, adding hacks, verifying hacks, and so on. We still have quite a few features on the way.
For example, we need to give you a hint about the safety of a hack. Many of the hacks we provide are likely to break when run through a preprocessor. Some of them can even break upon minification. While we can’t prevent this from happening, we should be able to tell you which hacks are safe and which are not. We only need to think of a way to test all this stuff with Grunt. If you want to help, you’d be more than welcome!
And last but not least, we want to be able to automate the testing. This is probably our biggest project for Browserhacks, and we’ve yet to figure out a way to do so. Ultimately, we’d like all tests and proof-tests to be automated so we don’t have to spend countless hours on BrowserStack testing all the browser/OS combos.
If you feel like helping with anything at all, that would be really awesome. Shoot us a message on Twitter or GitHub.
Note: by the way, I’d really like not having to retweet everything from the Browserhacks Twitter account, so if you people could follow it, that’d be cool. :D
]]>But first, let me introduce the topic because you probably wonder what the hell I am talking about. Nothing better than a little example for this.
$value: 13.37;
$length: $value + em;
whatever {
padding-top: $length;
}
I want to play a game… This example: working or not working?
Well obviously, it works like a charm. That’s probably why you see it in so many Sass demos.
Then you ask "if it works, why bother?". That’s actually a very fair question. Let’s continue our example, shall we? What if we apply — let’s say — the round()
function to our length?
$rounded-length: round($length);
Aaaaaand… bummer.
"13.37em" is not a number for 'round'.
Same problem with any function requiring a number (lengths are numbers in Sass) like abs()
, ceil()
, floor()
, min()
… Even worse! The unit()
function will also fail to return the unit.
This is because there is no unit anymore: it’s now a string. When you append a string (in this case em) to a number (13.37), you implicitly cast the number into a string.
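The same implicit cast exists in JavaScript, which might make it easier to picture: concatenating a number with a string yields a string, and number functions stop working on it.

```javascript
// Appending a unit (a string) to a number implicitly casts it to a string,
// just like `$value + em` does in Sass.
const value = 13.37
const len = value + 'em' // '13.37em', a string

// Number functions now misbehave on it:
const rounded = Math.round(len) // not a number anymore
```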
Indeed, if you check the type of your variable with the type-of()
function, you’ll see it’s not a number but a string.
type-of($length); // string
There is a very simple solution: instead of appending the unit, multiply the number by 1 unit. For example, 3 apples is strictly equivalent to 3 times 1 apple, right? Same thing.
$value: 13.37;
$length: $value * 1em;
whatever {
padding-top: round($length); // 13em
}
Problem solved! Please, use lengths when you need to, not strings.
]]>The following is a guest post by Hugo Darby-Brown, a talented frontend developer. I’m very glad to have him writing here today about a menu concept he came up with!
Before I start off, I’d like to say that this is more of a proof of concept than a method I’d recommend using on your next project. This menu uses the WebKit-specific CSS declaration -webkit-overflow-scrolling: touch
so support is a little flaky on older devices, but there are a few polyfills, which I will cover later (should you feel the urge to use this menu).
I wanted to create a horizontal scrolling navigation, similar to that of the iOS taskbar. Lots of responsive menus take the approach of displaying list items vertically on small screens, but I wanted to play with the idea of having menu items off the screen and swiping to reveal them.
I wanted the HTML markup to be as clean as possible; I guess it’s pretty self-explanatory.
<header>
<nav role="navigation">
<ul>
<li><a href="#">Home</a></li>
<li><a href="#">About</a></li>
<li><a href="#">Clients</a></li>
<li><a href="#">Contact</a></li>
</ul>
</nav>
<a href="#" class="nav-toggle">Menu</a>
</header>
This is the CSS that makes the effect happen. I’ve stripped out all the styling to highlight the key components that make the effect work.
nav {
overflow-x: scroll; /* 1 */
-webkit-overflow-scrolling: touch; /* 2 */
}
ul {
text-align: justify; /* 3 */
width: 30em; /* 4 */
}
ul:after {
/* 5 */
content: '';
display: inline-block;
width: 100%;
}
li {
display: inline-block; /* 6 */
}
Okay, so what’s going on here? In essence we’re creating a navigation that is too large for the screen. We set the overflow to scroll
, and the overflow-scroll type to touch
to allow for momentum scrolling. Explained in a bit more detail below (numbers match the comments in the code):
1. auto will work on some devices, but set this to scroll just to be sure.
2. -webkit-overflow-scrolling: touch enables momentum scrolling.
3. text-align: justify creates equally spaced li's, which takes the headache out of working out margins.
4. The width has to be larger than the sum of each li's width, so the list overflows.
5. This pseudo-element is text-align: justify's version of a clearfix.
6. display: inline-block keeps the items on one line so they can be justified.
We’re almost done, all we have to do is deal with the toggling. We could use a CSS hack for this, but that is not the point, so we’ll just use a tiny bit of JavaScript.
So we set the max-height
of the navigation to 0
in order to initially hide it, and add a transition
so when we toggle the class .show
the menu will appear to slide in from the top, pretty basic mobile menu stuff.
nav {
max-height: 0;
transition: 0.6s ease-in-out;
}
.show {
max-height: 15em;
}
Throw in some JS to toggle the class, and you’ve got yourself a basic slide down mobile menu.
// jQuery version
$('.nav-toggle').on('click', function(e) {
$('nav').toggleClass('show')
e.preventDefault()
})
// Vanilla JS version
document.querySelector('.nav-toggle').onclick = function(e) {
var nav = document.querySelector('nav')
nav.classList.toggle('show')
e.preventDefault()
}
A mobile-only menu isn’t much use these days, is it? So using a few min-width
media queries, we’ll turn this menu into a responsive, mobile-first menu.
@media (min-width: 31.25em) {
nav {
max-height: none; /* reset the max-height */
overflow: hidden; /* this prevents the scroll bar showing on large devices */
}
ul {
width: 100%;
}
.nav-toggle {
display: none;
}
}
The support is really not that bad, without being awesome either. As far as I know, it looks like this:
For unsupported browsers, there are a few polyfills that can help you, should you want to use it:
I think you’ll see a lot more menus taking a horizontal approach in the future, but unfortunately Android 2.x still makes up about a third of the market share of all Android devices, so until that share drops significantly I wouldn’t use this in any serious project.
I would love to hear your thoughts on -webkit-overflow-scrolling: touch;
and the future possibilities.
I would usually embed the demo but unfortunately iframes don’t play well with -webkit-overflow-scrolling: touch
, so it’s best if you play around with the code on CodePen (careful though, it doesn’t work great on some mobile browsers)!
Thanks for reading! If you think of anything to improve this menu concept, feel free to share. :)
]]>Mixins are usually quite easy to deal with. Functions are a little more underground in Sass. So what if we went through a couple of functions (including useless ones) to see how to build efficient ones?
If you build mixins or just like to play around with the syntax, you may have already faced a case where you needed to strip the unit from a number. This is not very complicated:
@function strip-unit($value) {
@return $value / ($value * 0 + 1);
}
It might look weird at first but it’s actually pretty logical: to get a number without its unit, you need to divide it by 1 of the same unit. To get 42
from 42em
, you need to divide 42em
by 1em
.
So we divide our number by the same number multiplied by 0, to which we then add 1. With our example, here is what happens: 42em / (42em * 0 + 1)
, so 42em / (0em + 1)
, so 42em / 1em
, so 42
.
@function strip-unit($value) {
@return $value / ($value * 0 + 1);
}
$length: 42em;
$int: strip-unit($length); // 42
There has been a request to include this function in Sass core, but Chris Eppstein declined it. According to him, there is no good use case for such a thing, and most existing usages stem from a poor understanding of how units work. So, no strip-unit()
in Sass!
I found this function in a Sass issue and was pretty amazed by its efficiency. All credits to its author.
Anyway, this is a function to clamp a number. Clamping a number means restricting it between min and max values.
- 4 clamped to 1-3 equals 3.
- -5 clamped to 1-10 equals 1.
- 42 clamped to 10-100 equals 42.
@function clamp($value, $min, $max) {
@return if($value > $max, $max, if($value < $min, $min, $value));
}
To understand this function, you have to understand the if()
function. if()
is a function mimicking the well-known one-line conditional statement: var = condition ? true : false
. The first parameter of the if()
function is the condition, the second one is the result if the condition is true, and the third one is the value if the condition is false.
Now back to our clamp function, here is what is going on:
- if the value is greater than the max, it returns $max
- if the value is less than the min, it returns $min
- otherwise, it returns $value
What I like about this method is that it is very concise and damn efficient. With nested if()
, there is no need for conditional statements: everything lies in one single line.
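For comparison, here is a hypothetical JavaScript transcription, where the conditional operator plays the role of if():

```javascript
// Nested ternaries mirror the nested if() calls of the Sass version.
const clamp = (value, min, max) =>
  value > max ? max : value < min ? min : value
```

Same behaviour as the examples above: clamp(4, 1, 3) yields 3, and clamp(-5, 1, 10) yields 1.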
Now what’s the point of this function? I guess it could be useful when you want to make sure the number you pass to a function is between two values, like a percentage for color functions.
$pc: percentage(clamp($value, 0, 100));
$darkColor: darken($color, $pc);
This one is a function by Chris Eppstein himself, to convert an angle from one unit into another (because there are 4 different ways of declaring an angle in CSS). It converts angles, but you could probably do the same for any fixed unit (px, in, cm, mm).
@function convert-angle($value, $unit) {
$convertable-units: deg grad turn rad;
$conversion-factors: 1 10grad/9deg 1turn/360deg 3.1415926rad/180deg;
@if index($convertable-units, unit($value)) and
index($convertable-units, $unit)
{
@return $value / nth(
$conversion-factors,
index($convertable-units, unit($value))
) * nth($conversion-factors, index($convertable-units, $unit));
} @else {
@warn "Cannot convert #{unit($value)} to #{$unit}";
}
}
Here is how it works: you give it a value and the unit you want to convert your value into (let’s say 30grad
into turn
). If both are recognized as valid units for the function, the current value is first converted into degrees, then converted from degrees into the asked unit. Damn clever and pretty useful!
$angle-deg: 30deg;
$angle-rad: convert-angle($angle-deg, rad); // 0.5236rad
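If the table of factors is the confusing part, it may help to see the same idea sketched in JavaScript (an illustration, not part of the original post): each factor expresses how many of that unit make up one degree, so we normalize to degrees and then scale into the target unit.

```javascript
// How many of each unit correspond to 1 degree:
// 1deg = 1deg, 10grad = 9deg, 1turn = 360deg, PI rad = 180deg.
const FACTORS = { deg: 1, grad: 10 / 9, turn: 1 / 360, rad: Math.PI / 180 }

function convertAngle(value, from, to) {
  // Convert into degrees first, then from degrees into the target unit.
  return (value / FACTORS[from]) * FACTORS[to]
}
```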
When you are working on very big Sass projects, you sometimes wish there was an @import-once
directive. As of today, if you import the same file twice, its content is output twice. Sounds legit, but it still sucks.
While we wait for Sass 4.0 which will bring the brand new @import
(solving this issue), we can rely on this little function I found in an issue on Sass' GitHub repo.
$imported-once-files: ();
@function import-once($filename) {
@if index($imported-once-files, $filename) {
@return false;
}
$imported-once-files: append($imported-once-files, $filename);
@return true;
}
@if import-once('_SharedBaseStuff.scss') {
/* …declare stuff that will only be imported once… */
}
The idea is pretty simple: every time you import a file, you store its name in a list ($imported-once-files
). If its name is already stored, you can’t import it a second time.
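The guard itself is a classic pattern; here is the same mechanics sketched in JavaScript with a Set (hypothetical names, just for illustration):

```javascript
// Registry of filenames that have already been imported.
const importedOnceFiles = new Set()

// Returns true the first time a filename is seen, false afterwards.
function importOnce(filename) {
  if (importedOnceFiles.has(filename)) return false
  importedOnceFiles.add(filename)
  return true
}
```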
It took me a couple of minutes to get the point of this function. Actually, this is how you should probably use it:
/* _variables.scss: initialize the list */
$imported-once-files: ();
/* _functions.scss: define the function */
@function import-once($filename) {
@if index($imported-once-files, $filename) {
@return false;
}
$imported-once-files: append($imported-once-files, $filename);
@return true;
}
/* styles.scss: import files */
@import 'variables'; /* Sass stuff only */
@import 'functions'; /* Sass stuff only */
@import 'component';
/* _component.scss: wrap content depending on function return */
@if import-once('component') {
.element {
/* … */
}
}
Now if you add another @import "component"
in styles.scss
, since the whole content of _component.scss
is wrapped in a conditional statement calling the function, its content won’t be output a second time. Clever.
You probably wonder what prevents us from doing something like this:
/* styles.scss - this doesn’t work */
@if import-once('component') {
@import 'component';
}
Unfortunately, we cannot import a file in a conditional statement; this just doesn’t work. Here is the reason mentioned by Chris Eppstein:
It was never intended that
@import
would work in a conditional context, this makes it impossible for us to build a dependency tree for recompilation without fully executing the file -- which would be simply terrible for performance.
Sass 3.3 will introduce maps, which come very close to what we often call associative arrays. The point is to have a list of key => value
pairs. It is already possible to emulate some kind of map with a nested-list workaround.
Let’s have a look at the following list $list: a b, c d, e f;
. a
is kind of mapped of to b
, c
to d
, and so on. Now what if you want to retrieve b
from a
(the value from the key) or even a
from b
(the key from the value, which is less frequent)? This is where our function comes in.
@function match($haystack, $needle) {
@each $item in $haystack {
$index: index($item, $needle);
@if $index {
$return: if($index == 1, 2, 1);
@return nth($item, $return);
}
}
@return false;
}
Basically, the function loops through the pairs; if the $needle
you gave is found, it checks whether it was found as the key or the value, and returns the other one. So with our last example:
$list: a b, c d, e f;
$value: match($list, e); /* returns f */
$value: match($list, b); /* returns a */
$value: match($list, z); /* returns false */
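If the pair juggling feels abstract, a rough JavaScript equivalent over an array of pairs might help (a sketch, not part of the original post):

```javascript
// Return the other half of the pair containing `needle`, or false.
function match(haystack, needle) {
  for (const pair of haystack) {
    const index = pair.indexOf(needle)
    if (index !== -1) {
      // Found as the key (index 0): return the value; otherwise return the key.
      return index === 0 ? pair[1] : pair[0]
    }
  }
  return false
}
```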
That’s all I got, folks. Do you have any cool Sass functions you sometimes use, or even made just for the sake of it?
]]>Anyway, a couple of days ago I stumbled upon a comment in a Sass issue listing a couple of advanced Sass functions to deal with lists. I found the idea quite appealing, so I made my own function library for this. In my opinion, it is always interesting to go deeper than "it just works", so here is a short blog post to explain my code.
Let’s start with something very simple: two small functions to target the first and last elements of a list. I don’t know about you, but I don’t really like doing nth($list, length($list))
. I’d rather do last($list)
.
$list: a, b, c, d, e, f;
$first: first($list); // a
$last: last($list); // f
Nice, isn’t it? Of course these functions are ridiculously simple to write:
@function first($list) {
@return nth($list, 1);
}
@function last($list) {
@return nth($list, length($list));
}
Since all values are also considered single-item lists in Sass, using both functions on a single-element list will obviously return the same value.
Last index of value x
Sass already provides an index()
function to retrieve the index of a given value in a list. It works well, but what if the value is present several times in the list? index()
returns the first index.
Good. Now what if we want the last one?
$list: a, b, c, d z, e, a, f;
$first-index: index($list, a); // 1
$last-index: last-index($list, a); // 6
$last-index: last-index($list, z); // null
I made two versions of this function: in the first one, the code is simpler; in the second one, the code is a little dirtier but the performance should be better.
/**
* Last-index v1
* More readable code
* Slightly worse performance
*/
@function last-index($list, $value) {
$index: null;
@for $i from 1 through length($list) {
@if nth($list, $i) == $value {
$index: $i;
}
}
@return $index;
}
/**
* Last-index v2
* Less beautiful code
* Better performance
*/
@function last-index($list, $value) {
@for $i from length($list) * -1 through -1 {
@if nth($list, abs($i)) == $value {
@return abs($i);
}
}
@return null;
}
The second version is better because it starts from the end and returns the first occurrence it finds, instead of looping through all the items from the start.
The code is a little ugly because, as of today, Sass @for
loops can’t decrement. Thus, we have to use an ugly workaround making the loop increment over negative values, then use the absolute value of $i
. Not cool, but it works.
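For what it’s worth, in a language whose loops can decrement, such as JavaScript, the same backward scan needs no workaround (1-based indexes kept to mirror Sass):

```javascript
// Walk the list backwards and return the 1-based index of the last match.
function lastIndex(list, value) {
  for (let i = list.length; i >= 1; i--) {
    if (list[i - 1] === value) return i
  }
  return null
}
```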
You already know Sass comes with a built-in function to add values to a list called append()
. While it does the job most of the time, there are cases where you need to add new values at the beginning of the list instead of the end. Thus a new prepend()
method.
$list: b, c, d, e, f;
$new-list: prepend($list, a); // a, b, c, d, e, f
$new-list: prepend(
$list,
now i know my a
); // now, i, know, my, a, b, c, d, e, f
As you can see, the signature is the same as the one for the append()
function. Now, let’s open the beast; you’ll be surprised how simple this is:
@function prepend($list, $value) {
@return join($value, $list);
}
Yup, that’s all. join()
is a built-in function to merge two lists, the second one being appended to the first. Since single values are considered lists in Sass, we can safely join our new value with our existing list, resulting in the new value being prepended to the list. How simple is that?
Inserting a value at index n
We can append new values to a list, and now even prepend new values to a list. What if we want to insert a new value at index n
? Like this:
$list: a, b, d, e, f;
/* I want to add “c” as the 3rd index in the list */
$new-list: insert-nth($list, 3, c); // a, b, c, d, e, f
$new-list: insert-nth($list, -1, z); // error
$new-list: insert-nth($list, 0, z); // error
$new-list: insert-nth($list, 100, z); // error
$new-list: insert-nth($list, zog, z); // error
Now let’s have a look at the function core:
@function insert-nth($list, $index, $value) {
$result: null;
@if type-of($index) != number {
@warn "$index: #{quote($index)} is not a number for `insert-nth`.";
} @else if $index < 1 {
@warn "List index 0 must be a non-zero integer for `insert-nth`";
} @else if $index > length($list) {
@warn "List index is #{$index} but list is only #{length($list)} item long for `insert-nth`.";
} @else {
$result: ();
@for $i from 1 through length($list) {
@if $i == $index {
$result: append($result, $value);
}
$result: append($result, nth($list, $i));
}
}
@return $result;
}
Here is what happens: we first run some checks on $index
. If it is strictly less than 1 or greater than the length of the list, we throw an error.
In any other case, we build a new list based on the one we pass to the function ($list
). When we get to the $index
passed to the function, we simply append the new $value
.
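The same walk translates almost line for line to JavaScript, if that helps picture it (1-based index as in Sass; a sketch for illustration):

```javascript
// Insert `value` before the element currently at 1-based position `index`.
function insertNth(list, index, value) {
  if (!Number.isInteger(index) || index < 1 || index > list.length) {
    throw new Error('Invalid index ' + index + ' for insertNth')
  }
  const result = []
  for (let i = 1; i <= list.length; i++) {
    if (i === index) result.push(value) // slot the new value in first
    result.push(list[i - 1])
  }
  return result
}
```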
We’re good with adding new values to a list. Now what if we want to change values in a list? Like changing all occurrences of a
into z
? Or changing the value at index n
? Sass provides nothing native for this, so let’s do it ourself!
Replacing value x
$list: a, b, r, a, c a, d a, b, r, a;
$new-list: replace($list, a, u); // u, b, r, u, c a, d a, b, r, u;
$new-list: replace($list, a, u, true); // u, b, r, u, c u, d u, b, r, u;
As you can see, the function also deals with nested lists if you set the 4th optional argument to true
. At indexes 5 and 6, we have 2 nested lists where a
has been replaced with u
in the second example.
@function replace($list, $old-value, $new-value, $recursive: false) {
$result: ();
@for $i from 1 through length($list) {
@if type-of(nth($list, $i)) == list and $recursive {
$result: append(
$result,
replace(nth($list, $i), $old-value, $new-value, $recursive)
);
} @else {
@if nth($list, $i) == $old-value {
$result: append($result, $new-value);
} @else {
$result: append($result, nth($list, $i));
}
}
}
@return $result;
}
Getting a little more complicated, isn’t it? Don’t worry, it’s not that hard to understand. For every element in the list (nth($list, $i)
), we check whether or not it is a nested list.
- If it is, and if $recursive
is set to true
, we call the replace()
function again on the nested list (recursive style!).
- Else, we compare the element with the value we want to replace ($old-value
). If they match, we append $new-value
instead; otherwise we append the initial value.
And there we have a recursive function to replace a given value with another given value in a list and all its nested lists.
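The recursion maps neatly onto JavaScript arrays, if a second angle helps (nested arrays standing in for nested lists; a sketch, not the original code):

```javascript
// Replace every occurrence of `oldValue` with `newValue`;
// descend into nested arrays only when `recursive` is true.
function replace(list, oldValue, newValue, recursive = false) {
  return list.map((item) => {
    if (Array.isArray(item) && recursive) {
      return replace(item, oldValue, newValue, recursive)
    }
    return item === oldValue ? newValue : item
  })
}
```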
Replacing a value at index n
Now if we want to replace a value at a specific index, it’s a lot simpler.
$list: a, b, z, d, e, f;
$new-list: replace-nth($list, 3, c); // a, b, c, d, e, f
$new-list: replace-nth($list, 0, c); // error
$new-list: replace-nth($list, -2, c); // a, b, c, d, z, f
$new-list: replace-nth($list, -10, c); // error
$new-list: replace-nth($list, 100, c); // error
$new-list: replace-nth($list, zog, c); // error
As you can imagine, it works almost the same as the insert-nth()
function.
@function replace-nth($list, $index, $value) {
$result: null;
@if type-of($index) != number {
@warn "$index: #{quote($index)} is not a number for `replace-nth`.";
} @else if $index == 0 {
@warn "List index 0 must be a non-zero integer for `replace-nth`.";
} @else if abs($index) > length($list) {
@warn "List index is #{$index} but list is only #{length($list)} item long for `replace-nth`.";
} @else {
$result: ();
$index: if($index < 0, length($list) + $index + 1, $index);
@for $i from 1 through length($list) {
@if $i == $index {
$result: append($result, $value);
} @else {
$result: append($result, nth($list, $i));
}
}
}
@return $result;
}
I think the code is kind of self-explanatory: we check for errors, then loop through the values of $list
; if the current index ($i
) is strictly equal to the index at which we want to replace the value ($index
), we replace the value. Else, we simply append the initial value.
Edit (2013/08/11): I slightly tweaked the function to accept negative integers. Thus, -1
means the last item, -2
means the second-to-last, and so on. However if you go too far, like -100
, it throws an error.
Hey, it’s getting pretty cool. We can add values to list pretty much wherever we want. We can replace any value within a list. All we have left is to be able to remove values from lists.
Removing values x
$list: a, b z, c, z, d, z, e, f;
$new-list: remove($list, z); // a, b z, c, d, e, f;
$new-list: remove($list, z, true); // a, b, c, d, e, f
Same as for the replace()
function, it can be recursive so that it works on nested lists as well.
@function remove($list, $value, $recursive: false) {
$result: ();
@for $i from 1 through length($list) {
@if type-of(nth($list, $i)) == list and $recursive {
$result: append($result, remove(nth($list, $i), $value, $recursive));
} @else if nth($list, $i) != $value {
$result: append($result, nth($list, $i));
}
}
@return $result;
}
I bet you’re starting to get the idea. We check each element of the list (nth($list, $i)
); if it is a list and $recursive == true
, we call the remove()
function on it to deal with nested lists. Else, we simply append the value to the new list as long as it isn’t the same as the value we’re trying to remove ($value
).
Removing a value at index n
The only thing missing is the ability to remove a value at a specific index.
$list: a, b, z, c, d, e, f;
$new-list: remove-nth($list, 3); // a, b, c, d, e, f
$new-list: remove-nth($list, 0); // error
$new-list: remove-nth($list, -2); // a, b, z, c, d, f
$new-list: remove-nth($list, -10); // error
$new-list: remove-nth($list, 100); // error
$new-list: remove-nth($list, zog); // error
This is a very easy function actually.
@function remove-nth($list, $index) {
$result: null;
@if type-of($index) != number {
@warn "$index: #{quote($index)} is not a number for `remove-nth`.";
} @else if $index == 0 {
@warn "List index 0 must be a non-zero integer for `remove-nth`.";
} @else if abs($index) > length($list) {
@warn "List index is #{$index} but list is only #{length($list)} item long for `remove-nth`.";
} @else {
$result: ();
$index: if($index < 0, length($list) + $index + 1, $index);
@for $i from 1 through length($list) {
@if $i != $index {
$result: append($result, nth($list, $i));
}
}
}
@return $result;
}
We break down the list ($list
) to build up the new one, appending all the items except the one at the index we want to delete ($index
).
Edit (2013/08/11): same as for the replace-nth
function, I tweaked this one to accept negative integers. So -1
means last item, -2
means second-to-last, and so on.
We did a lot of important things already, so why not end our series of functions with a couple of miscellaneous things? Like slicing a list? Reversing a list? Converting a list into a string?
$list: a, b, c, d, e, f;
$new-list: slice($list, 3, 5); // c, d, e
$new-list: slice($list, 4, 4); // d
$new-list: slice($list, 5, 3); // error
$new-list: slice($list, -1, 10); // error
In the first draft I made of this function, I edited the $start
and $end
values so they wouldn’t conflict with each other. In the end, I went with the safe mode: display error messages if anything seems wrong.
@function slice($list, $start: 1, $end: length($list)) {
$result: null;
@if type-of($start) != number or type-of($end) != number {
@warn "Either $start or $end are not a number for `slice`.";
}
@else if $start > $end {
@warn "The start index has to be lesser than or equals to the end index for `slice`.";
}
@else if $start < 1 or $end < 1 {
@warn "List indexes must be non-zero integers for `slice`.";
}
@else if $start > length($list) {
@warn "List index is #{$start} but list is only #{length($list)} item long for `slice`.";
}
@else if $end > length($list) {
@warn "List index is #{$end} but list is only #{length($list)} item long for `slice`.";
}
@else {
$result: ();
@for $i from $start through $end {
$result: append($result, nth($list, $i));
}
}
@return $result;
}
We make both $start
and $end
optional: if they are not specified, we go from the first index (1
) to the last one (length($list)
).
Then we make sure $start
is less than or equal to $end
, and that both are within the list’s range.
Now that we’re sure our values are okay, we can loop through the list’s values from $start
to $end
, building up a new list from those.
Question: would you prefer a function slicing from index n
for x
items (so basically $start
and $length
) to this?
Let’s make a small function to reverse the order of elements within a list so the last index becomes the first, and the first the last.
$list: a, b, c d e, f, g, h;
$new-list: reverse($list); // h, g, f, c d e, b, a
$new-list: reverse($list, true); // h, g, f, e d c, b, a
As you can see, by default the function does not reverse nested lists. As always, you can force this behaviour by setting the $recursive
parameter to true
.
@function reverse($list, $recursive: false) {
$result: ();
@for $i from length($list) * -1 through -1 {
@if type-of(nth($list, abs($i))) == list and $recursive {
$result: append($result, reverse(nth($list, abs($i)), $recursive));
} @else {
$result: append($result, nth($list, abs($i)));
}
}
@return $result;
}
As we saw earlier, @for
loops can’t decrement, so we use the negative-indexes workaround to make it work. Quite easy in the end.
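In JavaScript, where no such workaround is needed, the same function could look like this (a sketch, not the original code):

```javascript
// Reverse a copy of the array; optionally reverse nested arrays too.
function reverse(list, recursive = false) {
  return list
    .slice() // work on a copy, don't mutate the input
    .reverse()
    .map((item) =>
      Array.isArray(item) && recursive ? reverse(item, recursive) : item
    )
}
```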
Let’s finish with a function I had a hard time naming. I first wanted to call it join()
like in JavaScript, but there is already one. I then thought about implode()
and to-string()
, and went with the latter. The point of this function is to convert a list into a string, with the ability to use a string to join the elements with each other.
$list: a, b, c d e, f, g, h;
$new-list: to-string($list); // abcdefgh
$new-list: to-string($list, '-'); // a-b-c-d-e-f-g-h
The core of the function is slightly more complicated than the others’ because it needs a strictly internal boolean to work. Before I explain any further, please have a look at the code.
@function to-string($list, $glue: '', $is-nested: false) {
$result: null;
@for $i from 1 through length($list) {
$e: nth($list, $i);
@if type-of($e) == list {
$result: $result#{to-string($e, $glue, true)};
} @else {
$result: if(
$i != length($list) or $is-nested,
$result#{$e}#{$glue},
$result#{$e}
);
}
}
@return $result;
}
Note: recursion is implied here. It would make no sense not to join elements from inner lists, so you have no power over this: it is recursive.
Now, my very first draft returned something like this a-b-c-d-e-f-g-h-
. With an extra hyphen at the end.
In a foolish attempt to fix this, I added a condition to check whether it is the last element of the list. If it is, we don’t add the $glue
. Unfortunately, it only moved the issue to nested lists. Then I had a-b-c-d-ef-g-h
because the check was also made in inner lists, resulting in no glue after the last element of inner lists.
That’s why I had to add an extra argument to the function signature to differentiate the top level from the nested ones. It is not very elegant, but it is the only option I found. If you think of something else, be sure to tell.
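As a side note, a JavaScript version of the same function dodges the trailing-glue problem entirely, because Array.prototype.join only puts the glue between elements; stringifying the nested arrays first is enough (a sketch, not the Sass code above):

```javascript
// Stringify nested arrays first, then let join() place the glue
// between elements only, never after the last one.
function listToString(list, glue = '') {
  return list
    .map((item) => (Array.isArray(item) ? listToString(item, glue) : item))
    .join(glue)
}
```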
This function comes from Ana Tudor. It aims at shifting the indexes of a list by a certain value. It may be quite tricky to understand.
$list: a, b, c, d, e, f;
$new-list: loop($list, 1); // f, a, b, c, d, e
$new-list: loop($list, -3); // d, e, f, a, b, c
Hopefully the examples above make the purpose of this function clearer. The code itself isn’t obvious, so I’ll just leave it here.
@function loop($list, $value: 1) {
$result: ();
@for $i from 0 to length($list) {
$result: append($result, nth($list, ($i - $value) % length($list) + 1));
}
@return $result;
}
Thanks a lot for the input Ana!
I guess that’s all I got folks! If you think of anything that could improve any of those functions, be sure to tell. Meanwhile, you can play with this pen.
A couple of days ago I came up with a fairly new solution (to me) and I must say I am pretty satisfied with it so far. I might stick with this mixin for the next projects. Thus, I wanted to share it with you people.
But first, let’s take a minute to think about what our mixin has to do.
What I always wanted to be able to do is something like this:
.element {
absolute: left 1em top 1.5em;
}
And this should output:
.element {
position: absolute;
left: 1em;
top: 1.5em;
}
Unfortunately, we cannot do something like this in Sass, and probably never will be able to, since we have no way to define custom properties. So let’s try to do something close.
First, we will build the skeleton of our mixin. We seem to want to call it with the keyword absolute, so why not call it absolute()
? And we pass it a list.
@mixin absolute($args) {
/* Mixin stuff here */
}
Now how does it work? Basically, you define the name of the offset you want to edit, and the next value is the value you want to assign to this offset. Then you repeat this for as many offsets as you want.
The first thing to do is to tell our mixin which keywords to check for. The easiest way is to create a list inside our mixin:
@mixin absolute($args) {
$offsets: top right bottom left;
/* Order doesn’t matter */
}
Now, we will loop through the offsets and make three verifications:
1. Is the offset listed in the $args list?
2. Is the offset value within the list range?
3. Is the offset value a valid number?
@mixin absolute($args) {
$offsets: top right bottom left;
@each $o in $offsets {
$i: index($args, $o);
@if $i and $i + 1 <= length($args) and type-of(nth($args, $i + 1)) == number
{
#{$o}: nth($args, $i + 1);
}
}
}
Okay, this might look quite complicated. Why don’t we simply walk through it with comments?
@mixin absolute($args) {
/**
* List of offsets to check for in $args
*/
$offsets: top right bottom left;
/**
* We loop through $offsets to deal with them one by one
*/
@each $o in $offsets {
/**
* If current offset found in $args
* assigns its index to $i
* Or `false` if not found
*/
$i: index($args, $o);
/**
* Now we do the verifications
* 1. Is the offset listed in $args? (not false)
* 2. Is the offset value within the list range?
* 3. Is the offset value valid?
*/
@if $i and $i + 1 <= length($args) and type-of(nth($args, $i + 1)) == number
{
/**
* If everything is okay
* We assign the according value to the current offset
*/
#{$o}: nth($args, $i + 1);
}
}
}
I guess this is pretty clear now. Not quite hard in the end, is it?
We now have to deal with relative
and fixed
. I guess we could duplicate the whole mixin three times and simply rename it, but would that be the best solution? Definitely not.
Why don’t we create a private mixin instead? Something that isn’t meant to be called and only helps us for our internal stuff. To do so, I renamed the mixin position()
and overloaded it with another argument: the position type.
Note: you might want to rename it to avoid conflicts with other mixins in your project. Indeed, “position” is quite a common keyword.
@mixin position($position, $args) {
/* Stuff we saw before */
position: $position;
}
And now, we create the 3 mixins we need: absolute()
, fixed()
and relative()
.
@mixin absolute($args) {
@include position(absolute, $args);
}
@mixin fixed($args) {
@include position(fixed, $args);
}
@mixin relative($args) {
@include position(relative, $args);
}
Almost done. To indicate position()
is a private mixin, I wanted to prefix it with something. I first thought about private-position()
but it didn’t feel great. In the end I went with _position()
. Since I use hyphens to separate words in CSS, the underscore was unused. No risk of conflicts with anything in a project!
Note: remember hyphens and underscores are treated the same way in Sass, meaning -position()
will work as well. This is by design: hyphen or underscore is only a matter of presentational preference.
Using this mixin is pretty simple:
.element {
@include absolute(top 1em right 10%);
}
Outputs:
.element {
position: absolute;
top: 1em;
right: 10%;
}
Now, what if we try to do bad things like assigning no value to an offset, or an invalid value?
.element {
@include absolute(top 1em left 'HAHAHA!' right 10% bottom);
}
In this case:
- top will be set to 1em
- left won’t be set since we gave it a string
- right will be set to 10%
- bottom won’t be set since we didn’t give it any value

.element {
position: absolute;
top: 1em;
right: 10%;
}
Clean handling of errors and invalid inputs. Nice!
The only thing that still bothers me quite a bit with this is that we still have to write @include
to call a mixin. It might seem ridiculous (especially given the speed at which we’re able to press keys) but having to type an extra 8 characters can be annoying.
Hopefully, some day we will see a shorter way to call mixins in Sass. Indeed, someone already opened an issue and the idea seems to have made its way across minds, including Chris Eppstein’s. The +
operator has been proposed (as in the indented Sass syntax) but this could involve some issues when dealing with no-argument mixins using the @content
directive. Have a look at this:
abcd {
+ efgh {
property: value;
}
}
Is it supposed to mean "assign property: value
to a direct sibling efgh
of abcd
" or "call mixin efgh
in abcd
"? Thus someone proposed ++
instead and it seems quite good so far. No idea when or if we will ever see this coming though. Let’s hope.
I’m aware some of you won’t like this. Some will say it is overly complicated, some will say it is useless, and some will say their mixin is better. In no way is this better than any other approach; it simply suits my taste. I like the way it works, and I like the way I can use it.
Anyway, you can fork and play around with this pen if you feel like it. And be sure to hit me up if you ever need anything or want to propose something new. :)
The following is a guest post by Loïc Giraudel. Loïc is a JavaScript and Git expert at Future PLC (Grenoble, France) and my brother. He also knows his way around Bash scripting and frontend performance. I’m very glad to have him writing here. :)
You can’t talk about frontend performance without talking about images. They are the heaviest component of a webpage. This is why it is important to optimise images before pushing things live.
So let’s try a not-so-easy exercise: write a script to optimise a directory of images. Yup, I know there are a lot of web services offering this kind of feature, but it is also a good opportunity to practice some shell scripting.
But first, a simple warning: don’t expect big optimisations. To get the best results you have to decrease the image quality, and it’s better to do that manually than automatically. We are going to script simple operations that losslessly remove metadata and other unnecessary information.
I’m working in a Linux environment so this script will be a Bash script. Don’t worry! I will start with an introduction to running Bash scripts in a Windows environment.
Bash is the GNU shell and the most common shell in Unix/Linux environments. A shell is a command-line interpreter giving access to all the functionalities of the OS. Shell scripting is a powerful skill to improve development efficiency by automating common tasks like building a project and deploying it.
To be able to run Linux scripts on Windows, there are two methods: running a Linux virtual machine, or using Cygwin. Since it can be quite a pain to set up a virtual machine, we will go for the latter. Cygwin is a Linux simulator. Go to the download section, grab the setup.exe
file and execute it to launch the installer. You can leave all settings by default until you get to the step asking you which packages to install.
To add a package, click on the "Skip" label to switch it to a package version. Search for the following packages and add them (clicking on "Skip" is enough):
Once Cygwin is fully installed, simply open a Cygwin terminal. Let’s create a workspace to host our optimisation script: we create a "workspace" directory in the current user home:
# Create the workspace folder
mkdir workspace
# Enter the workspace folder
cd workspace
By default, Cygwin is installed at C:/cygwin/
so our new directory is at C:/cygwin/home/[username]/workspace
(where [username]
is your username). Let’s create an "images" directory and fill it with some random images from the wild wild web (you can do this manually). For this exercise, we are going to take cat pictures because, you know, everybody loves cats.
For each file, our script is going to run optipng and pngcrush on PNG files and jpegtran on JPG files. Before going any further and starting to write the script, let’s make a first try with each of these tools, starting with optipng.
Note: the -o7 parameter forces optipng to use the slowest (most aggressive) mode. The fastest is -o0.
Then pngcrush:
And now a JPG optimisation with jpegtran:
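The exact command lines mirror the invocations in the full script at the end of the post. As a runnable sketch, here is a hypothetical dry-run helper that prints which command(s) would be used for a given file, picked by extension:

```shell
# Hypothetical dry-run helper: print the command(s) we would run
# for a given input/output pair, picked by file extension.
# The invocations mirror those in the full script at the end.
pick_command() {
  input=$1
  output=$2
  case "${input##*.}" in
    png)
      echo "optipng -o7 -clobber -quiet $input -out $output"
      echo "pngcrush -q -rem alla -reduce $input $output"
      ;;
    jpg|jpeg)
      echo "jpegtran -copy none -progressive $input > $output"
      ;;
  esac
}

pick_command cat.png optimised/cat.png
pick_command cat.jpg optimised/cat.jpg
```

Dropping the echoes and running the commands directly is essentially what the optimise_image() function of the script does.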
You’ll find the whole script at the end of the article. If you want to try things as we go through all of this, you can save it (optimise.sh
) now from this GitHub gist.
As obvious as it can be, our script needs some parameters:
- -i or --input to specify an input directory
- -o or --output to specify an output directory
- -q or --quiet to disable verbose output
- -s or --no-stats to disable the output of stats after the run
- -h or --help to display some help

There is a common pattern to parse script options, based on the getopt
command. First, we create two variables to store both the short and long version of each parameter. A parameter which requires a specific value (for example our input and output directories) must end with ":".
Then we are going to use the getopt
command to parse the parameters passed to script and use a loop to call functions or define variables to store values. For this, we will also need to know the script name.
Now, we have to create two functions:
- a usage() function, called in the parameters loop if there is a -h or --help parameter,
- a main() function which will do the optimisation of the images.
To be called, the functions must be declared before the parameters loop.
Let’s try our help function. To be able to run the script, we have to set its execution bit (+x) with the chmod command.
Pretty cool, isn’t it?
Note: if you get a couple of errors like "./optimise.sh: line 2: $'\r' : command not found", you have to convert the line endings to Unix mode. To do so, open optimise.sh
in Sublime Text 2 and go to View > Line Endings > Unix.
And now, let’s create the main function. We won’t deal with the --no-stats
and --quiet
parameters for now. Below is the skeleton of our main function; it might look complicated but it’s really not, trust me.
So our main function starts by initializing both input and output directories with the passed parameters; if left empty, we take the current folder as input and create an "output" folder inside it (thanks to the mkdir
command once again).
The -p
parameter of the mkdir
command forces the creation of all intermediate directories if they are missing.
Once the input and output are ready, there is a little trick to deal with files containing spaces. Let’s say I have a file named "soft kitty warm kitty.png" (little ball of fur, anyone?): the loop would split this into 4 elements, which would obviously lead to errors. To prevent this from happening, we can change the Internal Field Separator (which contains the space character by default) to an end-of-line character. We will restore the original IFS at the end of the loop.
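Here is a self-contained sketch of the trick, using a throwaway directory; the IFS juggling mirrors the final script:

```shell
# Demonstrate the IFS trick with a filename containing spaces
dir=$(mktemp -d)
touch "$dir/soft kitty warm kitty.png" "$dir/plain.png"

SAVEIFS=$IFS            # save the current Internal Field Separator…
IFS=$(printf '\n\b')    # …and split on newlines only

count=0
for f in $(find "$dir" -name '*.png'); do
  count=$((count + 1))
done

IFS=$SAVEIFS            # restore the saved IFS
echo "$count files found"   # prints: 2 files found
rm -rf "$dir"
```

With the default IFS, the spaced filename would have been split into four separate words instead of one path.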
The image files are retrieved with the find
command, which accepts a regular expression as a parameter. If the output directory is a subdirectory of the input directory (which is the case if we don’t specify either) and is not empty, we don’t want to process images from there, so we skip file paths containing the output directory path. We do this with the grep -v $OUTPUT
command.
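A quick sketch of that filtering, with throwaway directories standing in for $INPUT and $OUTPUT:

```shell
# Skip files under $OUTPUT when it is a subdirectory of $INPUT
INPUT=$(mktemp -d)
OUTPUT=$INPUT/output
mkdir -p "$OUTPUT"
touch "$INPUT/cat.jpg" "$INPUT/dog.png" "$OUTPUT/cat.jpg"

# Two matches remain: the copy under $OUTPUT is filtered out
IMAGES=$(find "$INPUT" -regextype posix-extended -regex '.*\.(jpg|jpeg|png)' | grep -v "$OUTPUT")
echo "$IMAGES"
rm -rf "$INPUT"
```

Note that -regextype posix-extended is a GNU find option, which is fine both on Linux and under Cygwin.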
And then, we loop through the files and call an optimise_image
function with two parameters: the input and output filename for the image.
Now, we have to create this optimise_image()
method which is going to be fairly easy since we already have seen the command to optimise images before.
Let’s add some useful output to see progress and the final stats. What about something like this:
file1 ...................... [ DONE ]
file2 ...................... [ DONE ]
file_with_a_long_name ...... [ DONE ]
…
Would be neat, wouldn’t it? To do this, we first need to find the longest filename by doing a quick loop over the files.
Then before our main loop, we:
Finally, in the main loop we display the filename, then the "." symbols, then the " [ DONE ]" string.
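Here is a sketch of that output trick. The padding is built with a portable printf | tr pipeline here, while the script itself uses a bash brace expansion; the alignment logic is the same:

```shell
# Build a long pad of dots, then print "name ..... [ DONE ]" lines.
# (The final script builds the pad with a bash brace expansion instead.)
pad=$(printf '%600s' '' | tr ' ' '.')
sDone=' [ DONE ]'
max_filelength=20
linelength=$((max_filelength + ${#sDone} + 5))

print_progress() {
  filename=$1
  dots=$((linelength - ${#filename} - ${#sDone}))
  printf '%s ' "$filename"
  printf "%.${dots}s" "$pad"    # first $dots characters of the pad
  printf '%s\n' "$sDone"
}

print_progress file1
print_progress file_with_a_long_name
```

Every line comes out the same length: the shorter the filename, the more dots are taken from the pad.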
Let’s try it by running the following command:
# All parameters to default
./optimise.sh
# Or with custom options
./optimise.sh --input images --output optimised-images
# Or with custom options and shorthand
./optimise.sh -i images -o optimised-images
For the final stats we are going to display the amount of space saved. The optimise_image() method will increase a max_input_size counter with the filesize of the image to optimise, and a max_output_size counter with the filesize of the output image. At the end of the loop, we will use these two counters to display the stats.
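A tiny sketch of that accounting, with stat -c%s and expr as used in the script (the file contents are arbitrary stand-ins):

```shell
# Accumulate input/output byte counts the way the script does
f1=$(mktemp) ; f2=$(mktemp)
printf '12345' > "$f1"   # stand-in "original": 5 bytes
printf '123' > "$f2"     # stand-in "optimised": 3 bytes

max_input_size=0
max_output_size=0
max_input_size=$(expr $max_input_size + $(stat -c%s "$f1"))
max_output_size=$(expr $max_output_size + $(stat -c%s "$f2"))

space_saved=$(expr $max_input_size - $max_output_size)
echo "$space_saved bytes saved"   # prints: 2 bytes saved
rm -f "$f1" "$f2"
```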
To display human readable numbers, we can use a human_readable_filesize()
method, retrieved from StackExchange (let’s not reinvent the wheel, shall we?).
Let’s try it before adding the last bits to our code. Once again, we simply run ./optimise.sh
(or with additional parameters if needed).
Keep it up people, we are almost done! We just have to display progress output if the quiet mode is off.
Below lies the final script or you can grab it directly from this GitHub gist.
#!/bin/bash
PROGNAME=${0##*/}
INPUT=''
QUIET='0'
NOSTATS='0'
max_input_size=0
max_output_size=0
usage()
{
cat <<EO
Usage: $PROGNAME [options]
Script to optimise JPG and PNG images in a directory.
Options:
EO
cat <<EO | column -s\& -t
-h, --help & shows this help
-q, --quiet & disables output
-i, --input [dir] & specify input directory (current directory by default)
-o, --output [dir] & specify output directory ("output" by default)
-s, --no-stats & no stats at the end
EO
}
# $1: input image
# $2: output image
optimise_image()
{
input_file_size=$(stat -c%s "$1")
max_input_size=$(expr $max_input_size + $input_file_size)
if [ "${1##*.}" = "png" ]; then
optipng -o1 -clobber -quiet $1 -out $2
pngcrush -q -rem alla -reduce $1 $2 >/dev/null
fi
if [ "${1##*.}" = "jpg" -o "${1##*.}" = "jpeg" ]; then
jpegtran -copy none -progressive $1 > $2
fi
output_file_size=$(stat -c%s "$2")
max_output_size=$(expr $max_output_size + $output_file_size)
}
get_max_file_length()
{
local maxlength=0
IMAGES=$(find $INPUT -regextype posix-extended -regex '.*\.(jpg|jpeg|png)' | grep -v $OUTPUT)
for CURRENT_IMAGE in $IMAGES; do
filename=$(basename "$CURRENT_IMAGE")
if [[ ${#filename} -gt $maxlength ]]; then
maxlength=${#filename}
fi
done
echo "$maxlength"
}
main()
{
# If $INPUT is empty, then we use current directory
if [[ "$INPUT" == "" ]]; then
INPUT=$(pwd)
fi
# If $OUTPUT is empty, then we use the directory "output" in the current directory
if [[ "$OUTPUT" == "" ]]; then
OUTPUT=$(pwd)/output
fi
# We create the output directory
mkdir -p $OUTPUT
# To avoid trouble with filenames containing spaces, we store the current IFS (Internal Field Separator)…
SAVEIFS=$IFS
# …and we set a new one
IFS=$(echo -en "\n\b")
max_filelength=`get_max_file_length`
pad=$(printf '%0.1s' "."{1..600})
sDone=' [ DONE ]'
linelength=$(expr $max_filelength + ${#sDone} + 5)
# Search of all jpg/jpeg/png in $INPUT
# We remove images from $OUTPUT if $OUTPUT is a subdirectory of $INPUT
IMAGES=$(find $INPUT -regextype posix-extended -regex '.*\.(jpg|jpeg|png)' | grep -v $OUTPUT)
if [ "$QUIET" == "0" ]; then
echo ––– Optimising $INPUT –––
echo
fi
for CURRENT_IMAGE in $IMAGES; do
filename=$(basename $CURRENT_IMAGE)
if [ "$QUIET" == "0" ]; then
printf '%s ' "$filename"
printf '%*.*s' 0 $((linelength - ${#filename} - ${#sDone} )) "$pad"
fi
optimise_image $CURRENT_IMAGE $OUTPUT/$filename
if [ "$QUIET" == "0" ]; then
printf '%s\n' "$sDone"
fi
done
# we restore the saved IFS
IFS=$SAVEIFS
if [ "$NOSTATS" == "0" -a "$QUIET" == "0" ]; then
echo
echo "Input: " $(human_readable_filesize $max_input_size)
echo "Output: " $(human_readable_filesize $max_output_size)
space_saved=$(expr $max_input_size - $max_output_size)
echo "Space saved: " $(human_readable_filesize $space_saved)
fi
}
human_readable_filesize()
{
echo -n $1 | awk 'function human(x) {
s=" b Kb Mb Gb Tb"
while (x>=1024 && length(s)>1)
{x/=1024; s=substr(s,4)}
s=substr(s,1,4)
xf=(s==" b ")?"%5d ":"%.2f"
return sprintf( xf"%s", x, s)
}
{gsub(/^[0-9]+/, human($1)); print}'
}
SHORTOPTS="hi:o:qs"
LONGOPTS="help,input:,output:,quiet,no-stats"
ARGS=$(getopt -s bash --options $SHORTOPTS --longoptions $LONGOPTS --name $PROGNAME -- "$@")
eval set -- "$ARGS"
while true; do
case $1 in
-h|--help)
usage
exit 0
;;
-i|--input)
shift
INPUT=$1
;;
-o|--output)
shift
OUTPUT=$1
;;
-q|--quiet)
QUIET='1'
;;
-s|--no-stats)
NOSTATS='1'
;;
--)
shift
break
;;
*)
shift
break
;;
esac
shift
done
main
Of course this is just a simple sample (no pun intended); there is still a lot of room for improvement. Here are a couple of things we could do to improve it:
- more tools in the optimise_image method (by the way, I highly recommend you read this great article by Stoyan Stefanov),

Let’s say things straight: I’d never had the opportunity to work on an image gallery before. Actually I did, but back then I didn’t give a shit about performance, responsive design, high-density displays and all the topics cool kids always talk about. So this time I’ve been faced with some difficulties I had not encountered before; meaning I had to solve them by myself.
The main content of the site is photographs; the goal is to show them. Alix wanted something “Flickr-like”: some sort of wall of photos that automagically adapts to the size of your screen. Kind of a cool layout, really.
At first I thought about doing it myself and then…
It would have been a pain in the ass to work out such a “complicated” layout myself, so I thought about Masonry, but that’s kind of old school, right? In the end, I went with Isotope to lay out the items.
Isotope has to be the best JavaScript plugin I have ever worked with. Developed by David DeSandro, you can think of it as Masonry 2.0: it makes complicated box-based layouts fully customizable and, above all, easy.
The idea is quite simple: you define a container that will draw boundaries for the layout and Isotope will move all its child elements according to the available room.
$container.isotope({
itemSelector: '.gallery__item',
masonry: {
columnWidth: 410
}
})
What is really nice is that it takes advantage of hardware-accelerated CSS transforms (essentially translate
) if the browser supports them (otherwise it falls back on regular TRBL offsets).
Anyway, I wanted to give some emphasis to the author content: her picture and her name, a short description and one or two ways to contact her. I first tried to include this as if it was another block in the layout, but it looked kind of crowded. Instead, I decided to go with a fixed column. Not only does it make this content more valuable but it also gives the page the space it needs to look nice.
Meanwhile the pictures are all wrapped in a regular unordered list which has a huge left margin (to bypass the fixed sidebar).
<li class="gallery__item">
<img
class="gallery__image"
src="images/filename.jpg"
alt="Alt text"
width="400"
height="266"
/>
</li>
We needed two major features for this image gallery:
The first one was pretty easy since Isotope comes with a built-in way to filter and sort items. In the documentation, they recommend using a class as a tag and applying it to all the elements you want to tag. Then you create a little list with a jQuery selector as a data-filter
attribute (like .tag
). When you click on an element of this list, the plugin parses this data-attribute and displays nothing but the items matching the given selector.
I didn’t want to add classes for this, so I added a data-album
attribute to every item, holding the name of the album the image belongs to. Then, I give something like this to the data-filter
attribute of the filter list: [data-album*='album-name']
(literally “everything with a data-album
attribute containing 'album-name'”). Easy peasy!
Regarding the second feature, I basically needed a little lightbox thingie to display an image in full size when clicked. I could have made one, but since I am definitely not a JavaScript ninja, I would probably have ended up with code that could be improved. So I decided to rely on an existing solution; I wanted something both nice and efficient so I went with Avgrund by Hakim El Hattab.
Avgrund is a very lightweight modal plugin that does exactly what I want: open a modal on click, and close it with a close button, the ESC
key, or a click outside the lightbox.
One thing I wanted to do was progressively display the pictures when the page loads: the first one displayed immediately, then after a quick instant the second one, then the third, and so on until all images are displayed. It’s definitely not a key feature, just eye candy.
Isn’t it the perfect use case for CSS animations? Let’s jump on this opportunity; it’s not that often we can safely use CSS animations. First, the (really common) @keyframes
:
@keyframes opacity {
from {
opacity: 0;
}
to {
opacity: 1;
}
}
Now all I had to do was apply it to all items with a varying delay: the higher the index of the item in the list, the longer the delay. Perfect! Let’s loop! But wait… I don’t know the number of images on the page. I guess I could have gone up to something like 100 to be sure it works everywhere, but that would have bloated the CSS. Plus, I realized 20 is more than enough for most screens (including my 29").
@for $i from 1 through 20 {
.gallery__item {
opacity: 0;
animation: opacity 0.25s forwards;
}
.gallery__item:nth-of-type(#{$i}) {
animation-delay: $i * 0.1s;
}
}
Basically, I assigned opacity: 0
to all items so they don’t appear at first. Then all of them appear in about 250ms, except the first 20, for which the animation is slightly delayed according to their index in the list. The only thing left to do was wrapping this in a Modernizr class (.cssanimations
) to be sure elements are not set to opacity: 0
in unsupported browsers.
Of course, we wanted the site to look acceptable (if not good!) on small devices. I wasn’t sure about the way to display this photo gallery on mobile so I opted for the easy solution: put everything into one column. I’ll try to think of something better for a future version.
Thankfully, Isotope handled most of the work for me: when there is no more room for two columns, it wraps everything into a single one. I only had to make the “sidebar” static, remove the left-margin of the main container, tweak a couple of things and it was okay.
Thus when you load the page on your phone, you’ll see nothing but the author information starting with her picture. You get to read the tiny description, then if you scroll there are photos. I think it’s nice this way; it kind of reproduces the "Hi, I’m X. Here is my work" social flow.
Regarding the modal, I first tweaked it on small screens so it takes up almost the full viewport (leaving a small gap on each side). Then after some tests it occurred to me that a modal makes no sense on small devices, so I simply removed it.
Let me tell you this: dealing with retina displays is a pain in the ass. God, this is so annoying. I don’t even know why we ended up with such a thing… Did we really need it? In any case, this so-called “feature” involves a lot of extra work.
There are quite a few ways to handle graphics on retina displays, and it is no surprise most of them involve getting rid of images when possible by using SVG, CSS, fonts, canvas… When it comes to real images, the number of solutions gets smaller: replace with CSS or replace with JavaScript. Or do nothing, which is a solution I highly considered.
CSS image replacement within @media
blocks can work great… if you deal with background-images. It is even simpler with a preprocessor thanks to clever mixins (HiDPI for Sass, Retina.less for LESS).
But when you only have img
tags, you can’t do it with CSS alone. So you start looking for a JavaScript solution, and hopefully you find RetinaJS, which is a great little script to handle image swapping for high-density displays.
Basically the script parses all your image tags, makes an AJAX request to your server to check whether there is a file with the same name and @2x
appended right before the extension, and if there is, it swaps the current source with the one it found. All of this only if you are on a retina display, obviously.
So I guess it is not that bad since this solution handles almost everything for us. But really, is it worth it? Now we have to create 2 or 3 files for each image so they can look good everywhere depending on the device’s capabilities. It sucks.
Edit: I finally wrote my own script to deal with high-density displays because RetinaJS and LazyLoad were kind of conflicting with each other.
I think this is what took me the most time in the entire project even if I have a decent knowledge of frontend performance (without being an expert).
Of course I minified my stylesheets (with Sass) and my JS scripts (with the YUI Compressor). I set up Gzip with .htaccess
along with some cache headers. I even added a DNS prefetch for Google Fonts. And even if all this stuff is really nice, the most important thing to optimize here is… images.
When I first set up the layout with images and all, I used really big pictures (like 1600*1059px) and I was like “I’ll resize them automagically with CSS”. Sure. And the page weighed about 35MB. Ouch.
I quickly understood I had to handle 2 files for each image: one for the thumbnail (400*266) and a bigger one (800+) for when you click on it. This is what I did. I also smushed all images with JpegMini to remove unnecessary metadata. The page went down to 750KB. Not bad, right? Still not good enough though, especially for a small device on a crappy 3G connection.
The next step was to load images only when they are needed: to put it simply, only load images that are actually displayed on screen and not the ones below the fold. This is called lazy loading. Thankfully, I found an amazing JavaScript plugin doing exactly this. All I had to do was turn my markup into something like this:
<li class="gallery__item" data-album="album-name">
<img
class="gallery__image"
src="data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw=="
data-original="images/filename.jpg"
alt="Alt text"
width="400"
height="266"
/>
</li>
As you can see, the image source is a 1*1px blank GIF while the actual source lies in the data-original
attribute. Then the LazyLoad script checks all images to see whether they are above the fold; if they are, it swaps src
with data-original
. Every time there is a scroll, it checks again. Lightweight and comfy.
Thanks to LazyLoad, I could bring the page down to 380KB on a regular desktop screen. Definitely good. When viewing it on mobile, it goes down to… 700 bytes. Then it progressively loads the images as the user scrolls through them. How cool is that?
Even if it is a really, really small project (it took me a couple of hours), I have to say I am pretty satisfied with the current look. It feels nice and smooth on both a desktop screen and a mobile device. Image performance was pretty fun to deal with and I learnt quite a few things along the way.
Anyway, if you got any tip, advice or comment, be sure to share! Meanwhile, you can still follow @whyalix on Twitter for more awesome photos. ;)
Please note I may make a lot of comparisons with a MacBook Pro (my girlfriend has one) and an old-generation MacBook (the laptop I had before).
So, the Chromebook is an ultraportable laptop from Google running Chrome OS (Linux core), mostly made for web browsing, and sold at a lovely price: €299.
Let’s start with the hardware. First of all, I think it’s pretty nice, even if it doesn’t compete with the MacBook Pro. Obviously.
I’m not so comfortable with very technical specifications so I’ll just leave it here: Exynos 5 Dual Core 1.7Ghz, 2GB DDR3 RAM, 16GB SSD storage.
Yes, you read that right. Only a 16GB hard drive. This is because you’re meant to store all your data in the cloud (understand: Google Drive). On the bright side: no fan, no noise. It’s absolutely silent. A fly would make more noise than this.
The whole shell is not aluminium but (good) plastic, which is why you don’t get the same feeling as with the MBP, but it’s definitely better when it comes to weight (and price). Indeed, the 11.6" Chromebook is very lightweight at only 1.1kg (about 2.4lbs) for 1.8cm of thickness.
In any case, you can tell it is a small computer, halfway between a regular laptop and a netbook.
The Chromebook 11.6" screen resolution is limited to 1366*768, which is enough if you want my opinion. However, the screen quality isn’t awesome: it is a lower-end display with pretty bad viewing angles. So you wouldn’t buy the Chromebook for its screen.
Anyway, since this laptop is mostly made for web browsing and small applications, I think it’s more than enough.
Thus I can still enjoy fullscreen YouTube videos without my eyes bleeding, but I still prefer watching movies on my TV when I’m home (especially given the screen size).
Note: the Chromebook comes with a 0.3 megapixels webcam.
There are two speakers on the bottom of the case, which isn’t great when the laptop is put down (on a table, your knees, the couch, the bed), which is pretty much always the case. So the sound isn’t great.
It’s not awful, definitely not awful but it’s not high-quality sound. So if you want high-quality sound, you may need to plug in external speakers or headphones.
Oh man, this is good. You can rely on an average 6.5 hours of battery life with regular usage. This varies with what you do: from 5 hours when streaming up to 9 hours of casual browsing.
This is definitely a plus not having to worry much about the battery (at least for me).
The keyboard is pretty nice, really. Keys are large and smooth so typing is quite easy and most importantly, noiseless.
The Chromebook keyboard has been rearranged and optimized for web browsing: the upper row contains "back", "forward" and "refresh" keys. You also get "fullscreen" and "alt-tab"-like keys along with the traditional "brightness", "volume" and "power" buttons.
Note the "caps-lock" key has been replaced by a "search" key (quite similar to the "Windows" key), and "ctrl" and "alt" are pretty huge.
One funny detail is how letters are in lowercase on the Chromebook keyboard when all other keyboards are using uppercase. Made me smile when I noticed it. :)
The Chromebook gives us a close-to-MBP trackpad, with a two-finger vertical swipe to scroll and a two-finger tap as a right click, which is pretty neat.
However, a three-finger swipe doesn’t go back in history like on a MacBook Pro; instead, it moves one tab to the left or to the right depending on the direction.
This is actually cool but kind of disturbing when you come from a MBP. In a way, it makes sense, since there are "back" and "forward" keys on the keyboard.
In any case, the surface is not only smooth and pleasant but also quite large. It has to be the best trackpad I have ever had on a non-Apple laptop.
The Samsung Chromebook has 2 USB ports (one USB 2.0 and one USB 3.0), an HDMI port and an SD card reader. All of these are on the back of the laptop, which I don’t like much; I’d rather have them on the side. No big deal for sure, but having to plug / unplug something on the back of the laptop can quickly become a pain in the ass.
Beware, the HDMI connection may be a problem if you plan on connecting your laptop to a monitor, because monitors generally use VGA. So if you plan on using your Chromebook for talks, remember to buy an adaptor first. ;)
Chrome OS is freaking fast. It takes about 6 seconds between the moment you press the power button and the moment you’re on the desktop. This is probably due to the fact that most applications and services run in the browser. Indeed, there are very few things installed on the computer aside from Google Chrome.
The Chromebook is a web-based laptop, running on a web-based OS to use web-based applications. If you can’t stand Google services or don’t plan on having internet, this laptop isn’t for you.
Thus, the OS taskbar shortcuts essentially open new tabs in Chrome to Google services (Gmail, Google Drive, Youtube, Chrome Web Store, Google Maps, Google+…).
On a side note, Chrome OS comes with a built-in yet very simplistic image editor. This may sound irrelevant, but when you have images you want to crop / rotate for articles, this is really rad.
Thankfully, Google thought about offline usage and made Gmail and Google Drive fully usable when not connected to the internet. You can sort and even write emails in Gmail and write whole documents in Google Drive: everything will be synchronized / sent when WiFi is up again.
So this is pretty neat. Let’s say you have a couple of hours to kill on the train. No problem: you can deal with all your unread emails and work on your projects in Google Drive safely. No need for a connection.
There is even a section of the Chrome Web Store gathering offline applications (including games). Beware though, you need an internet connection to download these applications of course. ;)
So far, I am pretty satisfied with this computer. I guess you can say the Chromebook is a no-surprise laptop: you know from the start you will need WiFi to do most things. You know from the start it’s not a fucking beast. You know from the start it is mostly made for web browsing and writing documents.
Once you know that, you can decide whether you still want / need a Chromebook or not. As far as I’m concerned, I don’t do much on a computer aside from reading Twitter, making demos on CodePen, and writing articles now that I don’t play games anymore, and the Chromebook is really suited for this stuff.
Anyway, I recently had the opportunity to write an article for CSS-Tricks about a Sass function involving quite a lot of list manipulation. I introduced the topic by clearing a couple of things regarding Sass lists but I wanted to write a more in-depth article.
First things first: even creating a Sass list can be tricky. Indeed, Sass isn’t very strict with variable types. Basically, it means you can process a list quite like a string, or use list functions on a single string value. It is kind of a mess.
Anyway, we have a couple of ways to initialize a variable that could be treated as an empty list, although strictly speaking there is a single way to initialize a truly empty variable (whatever that means): null.
$a: ();
$b: unquote('');
$c: null;
$d: (null);
Now that we have defined our variables, let’s check their type. Just for fun.
type-of($a) -> list
type-of($b) -> string
type-of($c) -> null
type-of($d) -> null
Since $c and $d are strictly equivalent, we will remove the latter from the next tests. Let’s check the length of each variable.
length($a) -> 0
length($b) -> 1
length($c) -> 1
$a being 0 items long is what we would expect since it is an empty list. The string being 1 item long isn’t that odd either, since it is a single string. And the null variable being 1 item long isn’t weird either: null is pretty much a value like any other, so it has a length of 1.
This section has been quickly covered in the article at CSS-Tricks but since it is the very basics I have to put this here as well.
You can use spaces or commas as a separator, even though I feel more comfortable with commas since they are the classic separator for arrays in other languages (JavaScript, PHP…). You can check the separator of a list with the list-separator($list) function.
$list-space: 'item-1' 'item-2' 'item-3';
$list-comma: 'item-1', 'item-2', 'item-3';
Note: as in CSS, you can omit quotes for your strings as long as they don’t contain any special characters. So $list: item-1, item-2, item-3 is perfectly valid.
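For the record, here is what list-separator(…) returns for the two lists above (a quick sketch; results shown as comments, assuming a Sass version recent enough to ship list-separator, i.e. 3.3+):

```scss
$list-space: 'item-1' 'item-2' 'item-3';
$list-comma: 'item-1', 'item-2', 'item-3';

// list-separator($list-space) -> space
// list-separator($list-comma) -> comma
```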
You can nest lists. As in JavaScript or any other language, there is no limit to the level of depth you can have with nested lists. Just go as deep as you need to, bro.
/* Nested lists with braces and same separator */
$list: (
('item-1.1', 'item-1.2', 'item-1.3'),
('item-2.1', 'item-2.2', 'item-2.3'),
('item-3.1', 'item-3.2', 'item-3.3')
);
/* Nested lists without braces using different separators to distinguish levels */
$list: 'item-1.1' 'item-1.2' 'item-1.3', 'item-2.1' 'item-2.2' 'item-2.3',
'item-3.1' 'item-3.2' 'item-3.3';
You can omit parentheses (as you can guess from the previous example): you can define a non-empty list without any parentheses if you feel like it. This is because, contrary to what most people think, parentheses are not what create lists in Sass (except when empty); it is the delimiter (see below). Parentheses are just a grouping mechanism. That is the theory, at least. I’ve noticed parentheses are not just a grouping mechanism: when manipulating matrices (4/5+ levels of nesting), they are definitely not optional. This is too complicated for today though; we’ll dig into it in another blog post.
$list: 'item-1', 'item-2', 'item-3';
Indexes start at 1, not 0. This is one of the most disturbing things once you start experimenting with Sass lists, although in practice it doesn’t make things much more complicated.
nth($list, 0) -> throws error
nth($list, 1) -> “item-1”
Every value in Sass is treated as a one-element list: strings, numbers, booleans, whatever you can put in a variable. This means you can safely use some list functions even on things that don’t look like lists.
$variable: "Sass is awesome";
length($variable) -> 1
Beware! If you remove the quotes around this string, it will be parsed as a 3-item-long list (1: Sass; 2: is; 3: awesome). I recommend you quote your strings to avoid unpleasant surprises.
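To illustrate (a small sketch; results shown as comments):

```scss
$quoted: 'Sass is awesome';
$unquoted: Sass is awesome;

// length($quoted)   -> 1
// length($unquoted) -> 3
// nth($unquoted, 1) -> Sass
```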
Before getting into the real topic, let’s do a quick round-up of Sass list functions.
length($list): returns the length of $list.
nth($list, $index): returns the value at position $index in $list (throws an error if the index is greater than the list length).
index($list, $value): returns the first index of $value in $list (or null if not found).
append($list, $value[, $separator]): appends $value to the end of $list using $separator as a separator (keeping the current one if not specified).
join($list-1, $list-2[, $separator]): appends $list-2 to $list-1 using $separator as a separator (using the one from the first list if not specified).
zip($lists…): combines several lists into a comma-separated list where the nth value is a space-separated list of all the source lists’ nth values. In case the source lists are not all the same length, the resulting list will be the length of the shortest one.
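Here is a quick cheat-sheet of these functions in action (a sketch; results shown as comments):

```scss
$list: 'a', 'b', 'c';

// length($list)              -> 3
// nth($list, 2)              -> "b"
// index($list, 'c')          -> 3
// append($list, 'd')         -> "a", "b", "c", "d"
// join($list, ('e' 'f'))     -> "a", "b", "c", "e", "f"
// zip(1px 2px, solid dashed) -> 1px solid, 2px dashed
```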
This is where things get very interesting. And quite complicated as well. I think the best way to explain this kind of stuff is to use an example. I’ll use the same one I talked about in my Sass talk at KiwiParty last month.
Please consider an extended selector like:
.home .nav-home,
.about .nav-about,
.products .nav-products,
.contact .nav-contact {
}
…based on a list of keywords $pages: ('home', 'about', 'products', 'contact'). I found 3 ways to generate this selector based on the list; we’ll see them one by one.
But first, we will write the skeleton of our testcase:
$pages: (
'home',
'about',
'products',
'contact'
);
$selector: ();
@each $item in $pages {
/* We create `$selector` */
}
#{$selector} {
style: awesome;
}
This is the method I was still using a couple of weeks ago. It works, but it involves an extra conditional statement to handle commas (and it’s ugly). Please see below.
@each $item in $pages {
$selector: $selector unquote('.#{$item} .nav-#{$item}');
// Add comma if not dealing with the last item of list
@if $item != nth($pages, length($pages)) {
$selector: $selector unquote(',');
}
}
Basically, we add the new selector to $selector and, if we are not dealing with the last item of the list, we add a comma.
Note: we have to use unquote() to treat our new selector as an unquoted string.
This one is the cleanest of the three ways, though not the shortest. Anyway, it uses append(…) properly.
@each $item in $pages {
$selector: append($selector, unquote('.#{$item} .nav-#{$item}'), comma);
}
I think this is pretty straightforward: we append the new selector to $selector, explicitly separating it from the previous one with a comma.
Probably my favorite version since it’s the shortest; it relies on implicit appending, which is very neat, even though I’d still highly recommend the more explicit append(…) way.
@each $item in $pages {
$selector: $selector, unquote('.#{$item} .nav-#{$item}');
}
Instead of using append(…) with the 3rd parameter set to comma, we do it implicitly by dropping the function call and adding a comma right after $selector.
The three versions we saw in the previous section work like a charm. The one you should use is really up to you, although the one with append(…) is definitely the cleanest way of handling this. You can also do it in other, more complicated and dirty ways.
Anyway, this shows why having a very permissive syntax can be complicated. As I said at the beginning of this post, you can do pretty much whatever you want and if you want my opinion this isn’t for the best.
Because slides are not very self-explanatory, I think it might be cool to dig deep into the topic with expanded explanations, so that everybody can now fully understand what I was trying to explain. :D
Just for your information, here are my slides in French powered by Reveal.js:
I’ll skip the part where I introduce myself; I don’t think it has much of a point here. Instead, I’ll go straight to the introduction to explain what a CSS preprocessor is.
Sass, and pretty much any preprocessor, is a program aiming at extending a language in order to provide further features or a simplified syntax (or both). You can think of Sass as an extension of CSS; it adds to CSS what CSS doesn’t have and what CSS needs (or might need).
Among other things, Sass can be very useful for keeping stylesheets DRY (variables, mixins, placeholders), automating repetitive patterns (loops, functions) and making the code easier to maintain.
All of this is awesome. But when you just get started with Sass, you don’t really know what to do. So you declare a couple of variables, maybe make a mixin or two that you don’t really need and that’s pretty much it.
My talk aimed at giving some hints to get started with Sass, along with a collection of usecases and code snippets to show how to push stylesheets to an upper level.
The @extend feature has to be the one that made Sass so popular compared to other CSS preprocessors, including Less. Basically, you can make a selector inherit styles from another selector. It comes with abstract classes (also called placeholders): classes prefixed with a % symbol instead of a dot, which are not compiled into the final stylesheet and thus cannot be used in the markup. Their use is exclusive to the stylesheet.
As a very simple example, let’s make a placeholder of the clearfix method by Nicolas Gallagher.
%clearfix:after {
content: '';
display: table;
clear: both;
}
.element {
@extend %clearfix;
}
Outputs:
.element:after {
content: '';
display: table;
clear: both;
}
This example shows how we can use @extend and placeholders in a very basic way. We can think of a slightly more complex use case: some kind of message module. If you’re familiar with Twitter Bootstrap, then you’ll easily get what this is about: having a pattern for all types of messages, then differentiating them based on their color chart (green for OK, red for error, yellow for warning, blue for information).
Check out this Pen!
With vanilla CSS, you have several ways to do this:
- A .message class containing styles shared by all messages, then a class per message type. Pretty cool: no style repeated, but you have to add two classes to your elements (.message and .message-error). Less cool.
- An attribute selector like [class^="message-"] for the shared styles. Clever, but attribute selectors are quite greedy performance-speaking. Probably what I would do without Sass anyway.
Let’s see how we can Sass it:
%message {
/* shared styles */
}
.message-error {
@extend %message;
$color: #b94a48;
color: $color;
background: lighten($color, 38%);
border-color: lighten(adjust-hue($color, -10), 20%);
}
.message-ok {
@extend %message;
$color: #468847;
color: $color;
background: lighten($color, 38%);
border-color: lighten(adjust-hue($color, -10), 20%);
}
.message-warn {
@extend %message;
$color: #c09853;
color: $color;
background: lighten($color, 38%);
border-color: lighten(adjust-hue($color, -10), 20%);
}
.message-info {
@extend %message;
$color: #3a87ad;
color: $color;
background: lighten($color, 38%);
border-color: lighten(adjust-hue($color, -10), 20%);
}
Outputs:
.message-error,
.message-ok,
.message-warn,
.message-info {
/* shared styles */
}
.message-error {
color: #b94a48;
background: #efd5d4;
border-color: #d5929c;
}
.message-ok {
color: #468847;
background: #b6dab7;
border-color: #83ba7a;
}
.message-warn {
color: #c09853;
background: #f4ede1;
border-color: #dbba9e;
}
.message-info {
color: #3a87ad;
background: #bfdcea;
border-color: #7ac4d3;
}
No styles repeated, no heavy selector, only one class assigned in the markup. Pretty neat. However, even if there are no repeated styles in the final CSS, there are repeated lines in the Sass stylesheet; they are repeated because the $color variable changes in each scope. Isn’t this the perfect use case for a mixin?
@mixin message($color) {
@extend %message;
color: $color;
background: lighten($color, 38%);
border-color: lighten(adjust-hue($color, -10), 20%);
}
Then, we change our Sass a little bit:
.message-error {
@include message(#b94a48);
}
.message-ok {
@include message(#468847);
}
.message-warn {
@include message(#c09853);
}
.message-info {
@include message(#3a87ad);
}
Quite cool, right? And this is only a very easy example of what you can do with @extend and placeholders. Feel free to think of clever use cases as well.
REM (root EM) is awesome. Problem is, IE8 doesn’t understand it, and we cannot cross it off our support chart yet. We have to deal with it. Thankfully, it is simple enough to provide IE8 a fallback for REM: give it a PX value.
But duplicating every font-size declaration can be tedious, and converting REM to PX can be annoying. Let’s do it with Sass!
@mixin rem($value, $base: 16) {
font-size: $value + px;
font-size: $value / $base + rem;
}
.element {
@include rem(24);
}
Outputs:
.element {
font-size: 24px;
font-size: 1.5rem;
}
Calculations and fallbacks are handled by Sass. What about pushing things a little further by enabling some sort of flag for IE8 instead of always outputting the PX line? Let’s say you are using this in a constantly evolving project, or in a library or something. You might want to easily enable or disable IE8 support.
Simple enough: wrap the PX line in a conditional statement (@if) depending on a boolean you initialize either at the top of your stylesheet or in a configuration file.
$support-IE8: false;
@mixin rem($value, $base: 16) {
@if $support-IE8 {
font-size: $value + px;
}
font-size: $value / $base + rem;
}
.element {
@include rem(24);
}
Outputs:
.element {
font-size: 1.5rem;
}
On topic, I have written a blog post about a robust and extensive PX/REM Sass mixin called The Ultimate REM mixin.
I don’t know about you, but I don’t really like manipulating media queries. The syntax isn’t very typing-friendly: they require values, braces and all. Plus, I really like to manage breakpoints with keywords instead of values. Sass makes it happen; please consider the following mixin.
@mixin mq($keyword) {
@if $keyword == small {
@media (max-width: 48em) {
@content;
}
}
@if $keyword == medium {
@media (max-width: 58em) {
@content;
}
}
/* … */
}
When I want to declare alternative styles for a given breakpoint, I call the mq() mixin with the matching keyword as an argument, like @include mq(small) { … }.
I like to name my breakpoints “small/medium/large” but you can choose whatever pleases you: “mobile/tablet/desktop”, “baby-bear/mama-bear/papa-bear”…
We can even push things further by adding retina support to the mixin (based on HiDPI from Kaelig):
@mixin mq($keyword) {
/* … */
@if $keyword == retina {
@media only screen and (-webkit-min-device-pixel-ratio: 1.3), only screen and (min-resolution: 124.8dpi), only screen and (min-resolution: 1.3dppx) {
@content;
}
}
}
We can now safely use this mixin as below:
.element {
/* regular styles */
@include mq(small) {
/* small-screen styles */
}
@include mq(retina) {
/* retina-only styles */
}
}
Outputs:
.element {
/* regular styles */
}
@media (max-width: 48em) {
.element {
/* small-screen styles */
}
}
@media only screen and (-webkit-min-device-pixel-ratio: 1.3),
only screen and (min-resolution: 124.8dpi),
only screen and (min-resolution: 1.3dppx) {
.element {
/* retina-only styles */
}
}
The Sass way makes it way easier to debug and update in my opinion; readability is well preserved since alternative styles are based on keywords instead of arbitrary values.
Nowadays, using a grid system to build a responsive website has become a standard. There are a bunch of amazing grid systems out there, but sometimes you just want to build your own. Especially when you don’t need a whole Rube Goldberg machine for your simple layout. Let’s see how we can build a very simple grid system in Sass in about 12 lines:
/* Your variables */
$nb-columns: 6;
$wrap-width: 1140px;
$column-width: 180px;
/* Calculations */
$gutter-width: ($wrap-width - $nb-columns * $column-width) / $nb-columns;
$column-pct: ($column-width / $wrap-width) * 100;
$gutter-pct: ($gutter-width / $wrap-width) * 100;
/* One single mixin */
@mixin cols($cols) {
width: $column-pct * $cols + $gutter-pct * ($cols - 1) + unquote('%');
margin-right: $gutter-pct + unquote('%');
float: left;
@media screen and (max-width: 400px) {
width: 100%;
margin-right: 0;
}
}
Now, what does the code do exactly? From the wrapper width and the column width, we deduce the gutter width; we then convert both the column and the gutter to percentages of the wrapper, and expose a single cols() mixin computing the width of an element spanning $cols columns (and collapsing it to 100% on small screens).
And there you have a very simple yet responsive Sass grid.
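As a usage sketch (the class names are made up for the example; with the variables above, the computed values are approximately the ones shown in the comments):

```scss
.main {
  @include cols(4); // spans 4 of the 6 columns: width ≈ 65.789%
}
.sidebar {
  @include cols(2); // width ≈ 32.456%, margin-right ≈ 0.877%
}
```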
Check out this Pen!
CSS counters are part of the CSS 2.1 “Generated content” module (and not CSS3, as is often claimed), making item numbering possible with CSS only. The main idea is the following:
- you initialize one or several counters with counter-reset,
- you increment a counter with counter-increment,
- you display a counter with the :before pseudo-element and content: counter(my-counter).
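To make the idea concrete, here is a minimal vanilla CSS version numbering h2 headings (the counter name is just for the example):

```scss
body {
  counter-reset: section;         // initialize the counter
}
h2 {
  counter-increment: section;     // +1 for every h2
}
h2:before {
  content: counter(section) '. '; // display "1. ", "2. ", …
}
```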
.Now, what if you want nested counters? Where headings level 1 are numbered like 1, 2, 3, headings level 2 are numbered x.1, x.2, x.3, headings level 3 are numbered x.x.1, x.x.2, x.x.3…
Doing this with vanilla CSS isn’t too hard, but it requires code repetition and quite a lot of lines. With a Sass @for loop, we can do it in fewer than 10 lines of code.
/* Initialize counters */
body {
counter-reset: ct1 ct2 ct3 ct4 ct5 ct6;
}
/* Create a variable (list) to store the concatenated counters */
$nest: ();
/* Loop on each heading level */
@for $i from 1 through 6 {
/* For each heading level */
h#{$i} {
/* Increment the according counter */
counter-increment: ct#{$i};
/* Display the concatenated counters in the according pseudo-element */
&:before {
content: $nest counter(ct#{$i}) '. ';
}
}
/* Concatenate counters */
$nest: append($nest, counter(ct#{$i}) '.');
}
The code might be complicated to understand but it’s really not that hard once you’re familiar with Sass. Now, we can push things further by turning this shit into a mixin in order to make it both clean and reusable.
@mixin numbering($from: 1, $to: 6) {
counter-reset: ct1 ct2 ct3 ct4 ct5 ct6;
$nest: ();
@for $i from $from through $to {
h#{$i} {
counter-increment: ct#{$i};
&:before {
content: $nest counter(ct#{$i}) '. ';
}
}
$nest: append($nest, counter(ct#{$i}) '.');
}
}
.wrapper {
@include numbering(1, 4);
}
Note: a couple of people came to me after the talk to warn me against making tables of contents with CSS generated content (pseudo-elements), since most screen-readers cannot read it. More of a CSS than a Sass issue, but still good to note.
The last part of my talk was probably slightly more technical thus more complicated. I wanted to show where we can go with Sass, especially with lists and loops.
To fully understand it, I thought it was better to introduce Sass loops and lists first (remember, there were quite a few people in the room who didn’t know much about Sass).
/* All equivalents */
$list: ('item-1', 'item-2', 'item-3', 'item-4');
$list: ('item-1' 'item-2' 'item-3' 'item-4');
$list: 'item-1', 'item-2', 'item-3', 'item-4';
$list: 'item-1' 'item-2' 'item-3' 'item-4';
So basically you can omit parentheses and can either comma-separate or space-separate values.
A quick look at nested lists:
$list: (
(item-1, item-2, item-3) (item-4, item-5, item-6) (item-7, item-8, item-9)
);
// Or simpler:
// top-level list is comma-separated
// inner lists are space-separated
$list: item-1 item-2 item-3, item-4 item-5 item-6, item-7 item-8 item-9;
Now, here is how to use a list to access item one by one.
@each $item in $list {
/* Access item with $item */
}
You can do the exact same thing with a @for loop, as you would probably do in JavaScript, thanks to Sass’s list functions.
@for $i from 1 through length($list) {
/* Access item with nth($list, $i) */
}
Note: I have a very in-depth article on Sass lists scheduled for next week. Stay tuned for some Sass awesomeness. ;)
Now that we’ve introduced loops and lists, we can move forward. My idea was to build a little Sass script that outputs a specific background based on a page name, where the file names don’t follow any naming convention (hyphens, underscores, .jpg, .png, random folders…). So the home page would have background X, the contact page background Y, etc.
// Two-levels list
// Top level contains pages
// Inner level contains page-specific information
$pages: 'home' 'bg-home.jpg', 'about' 'about.png', 'products' 'prod_bg.jpg', 'contact'
'assets/contact.jpg';
@each $page in $pages {
// Scoped variables
$selector: nth($page, 1);
$path: nth($page, 2);
.#{$selector} body {
background: url('../images/#{ $path }');
}
}
Here is what happens: we loop through $pages, where each $page is itself a two-item list; inside the loop, we retrieve each piece of information with the nth() function (e.g. nth($page, 1) for the page name, nth($page, 2) for the file path) and use them to build the selector and the background URL.
Outputs:
.home body {
background: url('../images/bg-home.jpg');
}
.about body {
background: url('../images/about.png');
}
.products body {
background: url('../images/prod_bg.jpg');
}
.contact body {
background: url('../images/assets/contact.jpg');
}
I finished my talk with one last example with lists and loops, to show how to build an “active menu” without JavaScript or anything server-side; only CSS. To put it simply, it relies on the page name matching the link name: the link to the home page is highlighted when it’s a child of .home (a class on the html element); the link to the contact page is highlighted when it’s a child of .contact. You get the idea.
To show the difference between nice and very nice Sass, I made two versions of this one. The first one is cool but meh, the second one is clever as hell (if I may).
Let’s save the best for last. The idea behind the first version is to loop through the pages and output styles for each one of them.
@each $item in home, about, products, contact {
.#{$item} .nav-#{ $item } {
style: awesome;
}
}
Outputs:
.home .nav-home {
style: awesome;
}
.about .nav-about {
style: awesome;
}
.products .nav-products {
style: awesome;
}
.contact .nav-contact {
style: awesome;
}
Not bad. At least it works. But it repeats a bunch of things and this sucks. There has to be a better way to write this.
$selector: ();
@each $item in home, about, products, contact {
$selector: append($selector, unquote('.#{$item} .nav-#{$item}'), comma);
}
#{$selector} {
style: awesome;
}
Outputs:
.home .nav-home,
.about .nav-about,
.products .nav-products,
.contact .nav-contact {
style: awesome;
}
This is hot! Instead of outputting shit in the loop, we use it to create a selector that we then use to define our “active” styles.
Is there a performance difference between .message and .message-error, .message-ok, .message-info, .message-warn?
None. The only difference is that in the first case, you have to apply 2 classes to your element instead of one. Per se, having to use 2 classes on the same element isn’t a problem at all.
However, what can be considered odd is that the 2 classes are co-dependent, meaning they only make sense when they are together: .message on its own won’t do much since it has no color chart, while .message-error alone will look ugly since it lacks basic styles like padding and such.
Your @media mixin outputs a media-query block every time you use it. Ain’t you afraid of performance issues?
That’s true. Sass doesn’t automatically merge media query rules yet. However, tests have been done, and they showed that once GZipped, there was no difference between duplicated and merged media queries.
"… we hashed out whether there were performance implications of combining vs scattering Media Queries and came to the conclusion that the difference, while ugly, is minimal at worst, essentially non-existent at best."
In any case, if you feel dirty having duplicated media queries in your final CSS, even if it doesn’t make any difference, you can still use this Ruby gem to merge them. Please note that merging media queries may mean reordering CSS, which may involve some specificity issues. More tests needed.
Would you rather recommend Compass or Bourbon?
Well, frankly it’s up to you. However, note that the Compass team works directly with the Sass team, so they are and will always be up to date. Bourbon, on the other hand, is a side project which isn’t affiliated with Sass in any way (well, except for the obvious).
Moreover, Compass comes with a sprite generator, Blueprint for your grids, a vertical rhythm module and a bunch of other cool things like math functions, image dimension helpers, and much more…
So if you want my opinion: definitely Compass.
Do you think we will ever be able to connect Sass to some kind of database to auto-supply lists or something?
Honestly, I don’t think so, but I could be wrong. I know Sass developers want to do the right thing and try to stick as much as possible to the “CSS scope” (because in the end, what is compiled is CSS). Connecting Sass to a database to generate lists, then doing things with these lists (like this pure Sass chart) would probably be out of scope (yet awesomely clever).
However, they are including awesome features in Sass starting with the next version (3.3), which should bring sourcemaps, huge improvements to the parent selector (&), inner-string manipulation like str-index()…
I think I’ve covered pretty much everything I talked about at KiwiParty, even more (I’m not limited by time on my blog). If you feel like some parts deserve deeper explanations, be sure to ask.
It’s a one-day French conference in Strasbourg (France), gathering 10 speakers and more than 150 people to talk about frontend technologies, accessibility, ergonomics, webdesign, and so much more.
So Friday June 28 at 2:00PM I was on stage to talk about Sass, and how we can use preprocessors to push our stylesheets to an upper level: “Kick-ass CSS with Sass” (“Des CSS kick-ass avec Sass”).
Basically, my talk is a collection of code snippets and real-life usecases of Sass to show how we can use a preprocessor further than declaring a couple of variables while keeping the code simple enough not to turn it into a Rube Goldberg machine.
I’d never been to a conference before, so I didn’t really know what to expect. A bunch of awesome people, great talks, food, and most of all: web. It was an amazing day, for sure.
Plus, we had the opportunity to meet two awesome dudes of our field:
I could also meet all those great French people from Twitter and put some faces on names. Probably the best thing of this day. :)
It went great. People seemed very interested in the topic and I felt like they were understanding the main point of the conference, so it’s something!
I was kind of nervous, obviously. My laptop dying the night before the big day didn’t help though… Luckily, I was able to buy a Chromebook (which will probably be the topic of another article) and all went well.
This experience was kind of incredible actually. I walked into that room where dozens and dozens of people were waiting for me to talk to them. My hands were kind of sweaty and I was shivering a little at first but it all disappeared when I started talking.
Then a little voice popped into my head saying “this is too complicated”, or “this is obvious to you but not to them”, or “what the hell are you trying to explain?”. It stayed there during the whole talk, which was kind of disturbing. I couldn’t have real-time feedback of course (imagine what that would look like), so I had no idea if people were enjoying my talk or hoping it would soon be over.
In the end, I had a couple of questions (most of them very relevant) which I was prepared to answer. Yeah that’s right, I prepared the Q&A people!
Well, it’s been almost 9 months now that I’ve been using Sass on an almost daily basis. In the last few months, I’ve been intensively hacking around the language, reading every tiny bit of documentation I could find (especially Compass’s) to push things further.
When my girlfriend suggested I give a talk at KiwiParty back in April, I laughed. I had nothing to talk about. Then, it was kind of an inception; the idea kept spinning in my head looking for a topic to speak about. Until I found it: Sass.
Retrospectively, it was a risky bet. Speaking about CSS preprocessors can be quite complicated, especially in France where the topic is pretty controversial. I could have been faced with fervent opponents of CSS preprocessing, turning my talk into a troll-fest. Luckily, I wasn’t.
Unfortunately, my talk wasn’t recorded so no way to access it online, sorry people. :(
Regarding pictures, my girlfriend took a couple of photos as you can see in this article. Otherwise, you can find more pics of the whole event on Flickr tagged #KiwiParty.
Oh, and of course here are my slides (in French, but it’s mostly code); read top to bottom then left to right. I’m currently writing a blog post digging deep into my slides, so be patient English-speaking friends. :)
So all in all, it was an amazing experience. Big thanks to the Alsacreations team and to all of you who were in the room to hear my talk.
I hope to be part of it next year as well. Meanwhile, I’m available to hack your Sass. ;)
Cheers!
God, I suck at JS. Thankfully, I get slightly better each day. My code is more and more structured, but I’m far from being even acceptable at JavaScript. So many things are still a mess in my head. I’d really like to get better before the end of the year; at least good enough to do some simple stuff without struggling for hours.
I have good hope that once I’m more comfortable with JavaScript, I will also be more comfortable with Canvas. This is a fantastic tool offering close to endless possibilities. I had the opportunity to use it once or twice in the past, but it mostly came wrapped in some kind of library.
I know there are some pretty good tutorials out there to learn how to use Canvas, but I didn’t take the time to dig into them. What do you think, people: worth it? Or can we live without it for now?
I have some basic knowledge regarding the Flexible Box Module. I even made all the CSS-Tricks Almanac entries on this stuff and the complete guide for it. But I am not comfortable with this. I haven’t implemented it in any project yet, and can’t wait to do so.
I truly believe Flexbox is the future of CSS layout, especially when it comes to RIA (Rich Internet Applications) and web apps. Floats and inline-blocks are sucky as hell when it comes to complex architectures.
On a lesser extend (mostly because of the browser support), I think the Grid Layout Module is equally very interesting. Layout possibilities are endless, and it’s only the beginning: the specification is still not over.
I consider myself a good “CSS developer” (if such a thing even exists); I have solid knowledge of the language, I know how to do pretty much anything that comes up in a standard-to-advanced frontend project, and I’m aware of performance, accessibility, responsive design and all the topics that cool kids talk about. But if there is one thing I still don’t get properly, it has to be z-index.
I swear, the people behind the specification of this property were either sadists or completely high. If you ask me, this is by far one of the most complicated things to understand in CSS2.1. This excellent article by Philip Walton proves it.
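To illustrate the kind of trap hiding in there, here is a minimal sketch (the class names and markup are made up for the example): a child with a huge z-index still cannot climb above a sibling of its parent if that parent creates its own stacking context.

```css
/* Hypothetical markup: two siblings, .modal and .widget,
 * the latter containing .widget-child. */
.modal {
  position: relative;
  z-index: 10;
}
.widget {
  position: relative;
  z-index: 1; /* creates a stacking context for its descendants */
}
.widget-child {
  position: relative;
  z-index: 9999; /* only compared against .widget's other children,
                    so it still renders below .modal */
}
```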
I feel like I lack some experience when it comes to large-scale projects. Same goes for teamwork. Even if I work in a team, I tend to be the only developer on the projects I’m working on. Other people are either designers or managers, and such.
I’d really like to work with several developers on the same project before the end of the year. I’m sure I would learn a ton of things. Working with multiple people on the same codebase is very different from working alone with one’s own code.
It has been weeks since I first designed the first draft of this post in my head. Back then, I wanted to include “go to a conference”, and even “speak at a conference”. Fortunately, I don’t have to include them since I’ll be attending the KiwiParty as a speaker in late June. Very excited. :)
Anyway, this is only the tip of the iceberg; there are so many more things I would love to learn and do before even the end of the month. Alas, time isn’t expandable!
What about you people? What is it that you’d like to be able to do before the end of 2013?
]]>I take this article as an opportunity to make some kind of assessment. Any comments appreciated, of course. ;)
In 6 months, I’ve released exactly 30 articles (this is the 31st) including 3 specials: 2 interviews (one of me by Clément Osternaud and one of Manoela Ilic by me) and a guest post (from Ana Tudor).
The most successful article is definitely Dig deep into CSS linear gradients (the guest post from Ana Tudor) with more than 30,000 viewed pages and 32 comments. Then it is a very old one: Why I switched from LESS to Sass with over 12,000 viewed pages for 29 comments. The third most read article is My CSS aha moment with about 11,000 viewed pages and 30 comments. Note that other articles like Why a CSS alternative to the select element is not possible, Simulate float: down and Future of CSS layout: Grid have been very well received as well.
All articles have drawn more than 82,000 unique visitors for about 150,000 viewed pages. The average visit duration is close to one minute, and the number of pages per visit is around 1.4.
Enough about me, let’s talk about you: 28% of users come from the United States, 8% from the United Kingdom, then a little more than 7% from France.
Regarding browsers, I’m a lucky bastard: Chrome gets more than 60% of the browser share on my site, then it’s Firefox with 17.5%, and then Safari with 16%. Internet Explorer trails behind with 1.8%.
About 14% of users read articles on their mobile phone (cf. fig 2), and 7 out of 10 of them use an Apple phone. The other 3 use Android (I’m one of them).
The most important traffic source is definitely Twitter (actually t.co), then it’s Reddit and then Codrops. A little further behind, we have Sidebar.io, Google and CSS-Tricks.
You tell me. Is there anything you’d like to see?
I’m thinking about adding an excerpt for each blog post to the home page. I think some people asked for it. My problem with this is that excerpts are a pain in the ass to do if you want to do them right, and when auto-generated, they generally suck. I’ll see if I can come up with a decent solution.
I would also like to add a small banner for Browserhacks to the sidebar. I’ll work that out sometime soon.
Anyway, thank you all for reading. That’s what keeps me writing. ;)
]]>I think mine was like two years ago or something and since then my CSS has been better than ever. I don’t really remember how it came up, but suddenly I understood that absolutely all elements on a page are rectangles.
God, that sounds stupid now, but it really helped me understand how to write efficient CSS. You know, at first you don’t necessarily get that a line of text isn’t shaped around the text but follows the same pattern as all other elements.
And when you get that, and more generally the whole box model (which says that the total width equals width + padding-left + padding-right + border-left + border-right), everything becomes so simple.
Seriously, the first thing to understand when learning CSS is that every element is following the same pattern: a content-box in a padding-box in a border-box in a margin-box; I don’t know why my teachers didn’t even start with that.
Once you get that, it’s really not that hard to produce correct (not necessarily efficient, but still correct) CSS.
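As a quick illustration (the numbers are made up for the example), here is how the default box model adds up:

```css
.box {
  width: 200px;      /* the content box */
  padding: 10px;     /* + 10px on each side */
  border: 5px solid; /* + 5px on each side */
  margin: 20px;      /* outside the border, not part of the width */
}
/* Total rendered width: 200 + (2 × 10) + (2 × 5) = 230px */
```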
What about you? What was your Aha moment?
]]>My name is Kitty Giraudel. I’m a frontend developer on a work-based learning program at Crédit Agricole Sud Rhône Alpes in Grenoble (France), hoping to work in a web agency starting from September. I’m really into frontend languages, especially CSS & JS, as well as everything that comes with those languages: ergonomics, performance, accessibility, user experience, and much more.
I am the co-author of Browserhacks, a website aiming at gathering all the dirty little secrets from browsers to do some browser sniffing; not that I support that kind of thing, but someone had to make such a tool. ;) I also developed Wild Web Watch, a web-related watch tool (which unfortunately ages pretty badly). I also take care of the Sass port of Raphaël Goetter’s framework, KNACSS.
Beside that, I write a lot for the web, starting on my site but for Codrops and CSS-Tricks as well.
Since I didn’t know what to do after high school, I decided to join Ariès Grenoble, a school for computer graphics (web, print and 3D stuff), in order to become a game designer.
During the preparatory class, I realized I didn’t like 3D stuff, which made me revise my path a little bit. Since I have two sisters and a brother in the web industry, I decided to join the “Webdesign” program just to “see what it looks like”. A long and kind of boring year, since this program included a lot of print design (yes, this is weird for a webdesign program).
In September 2011, I got into the “Webmaster” program (still at Ariès) as work-based learning at Crédit Agricole Sud Rhône Alpes; I felt like I fit. I wrapped my head around an array of languages, going from HTML/CSS to PHP/MySQL by way of ActionScript 3 and Flex, and got my diploma with commendation, pretty confident in the idea of becoming a web developer.
I wanted to push this idea further last year by joining what comes close to “Computer Science” (still at Ariès), but I have to say I really didn’t belong there. Backend languages like Java and C++ and server stuff really aren’t for me.
That ain’t easy. I think the biggest “problem” of client-side languages is that they are dependent on the client. This implies a lot of hacks and tricks to make everything work everywhere. This is even more true today with mobile devices like tablets and smartphones or even TV screens! So this asks for a lot of patience and experience (the latter comes with time, hopefully).
I also think we really have to love experimenting and trying new things. We work with constantly evolving languages, which implies reading as many docs and tutorials as we can. Being aware of what’s coming is part of a web developer’s job.
My favourite thing about my job has to be learning things. It’s definitely because I’m passionate that I’m comfortable with some things today. As good as my web teacher was, I don’t owe him my skills (not all of them, at least). Long story short, I enjoy reading web-related stuff.
Beyond reading, it’s great to be able to easily discover and learn new things like CSS features, JavaScript APIs, preprocessors and much more (especially thanks to tools like CodePen and GitHub). And if we ever happen to use what we’ve learnt in real-life projects, then it’s even better!
Ironically, even if I am able to define what I like in my job I don’t think I’m able to tell what I like the least. Maybe not being always able to use everything I know in real-life projects because of technical constraints like performance, maintainability or browser support. But this is part of the job; we can’t use everything we know, especially when it comes to new — somewhat borderline — features ("hello CSS grid!").
But this “bad side” of our job is what makes it interesting. Producing clean, maintainable and future-proof code is what makes the frontend developer work fun.
Haha! That’s a tough one! I guess I’d love to work in a company with an interesting web unit and a dedicated team to move things forward. I particularly hope to keep my thirst for learning. If I manage to keep that, I’ll consider myself happy. ;)
]]>And I always tell myself the same thing “Yeaaaaah… so that should explain why your CSS is a fucking mess”.
Yes. CSS has a very easy syntax based on English words. I don’t think it could be much simpler, since it can be summed up in 3 words: selector, property, value.
An 8-year-old child could write some CSS without even having any explanation of how to do so. Even HTML has a more complicated syntax than CSS, since some elements need a closing tag, some don’t, some have attributes, some don’t, some can’t be inside others, and so on. CSS is always the same.
selector [, selector2, …] [:pseudo-class] {
property: value;
[property2: value2;
…]
}
Most of all, CSS means something. It uses real words, understandable by anyone. When you read .element { color: red; }, you can be pretty sure it means an item called “element” is red. It’s a no-brainer.
The first “problem” (for lack of a better word) with CSS is that it is a constantly evolving language. It was first introduced in 1994 if I’m not mistaken, so almost 20 years ago. After 3 major versions (CSS1, CSS2 and CSS2.1), CSS is now divided into modules growing at their own speed (Colors Level 3, Selectors Level 4, etc.). It means you cannot simply “learn CSS” and never get back to it. You can learn the basics, yes, but it’s not enough.
Some things I learnt 2 years ago are irrelevant now, and some things I’m learning today might disappear or become bad practices in the future. It is a non-stop evolution, which is cool, but it requires developers to be very careful.
The thing is, since CSS is interpreted on the client side (meaning by the browser itself), its rendering depends on the interpreter (once again, the browser).
Yes, HTML and JavaScript are as well. But unless you’re using new HTML5 elements (which don’t provide much more than semantics), your HTML, as long as it is valid, won’t differ from one browser to another.
JavaScript is kind of like CSS. The interpretation depends on the JavaScript engine version. For example, Internet Explorer 9 doesn’t use the same ECMAScript engine as Firefox or Chrome (Chakra for IE9, SpiderMonkey for Firefox, V8 for Chrome).
Anyway, in order to write consistent CSS, you have to know which browsers support which features, or partially support them, how to write fallbacks, when to use hacks, and so on. It requires some knowledge, and most of all, some experience.
Take the Flexbox module for example. It was introduced in 2009 and has gone through 3 different syntaxes since then, resulting in a blurry mess when trying to get the best browser support:
.flex {
-ms-box-orient: horizontal;
display: -ms-flexbox;
display: -webkit-flex;
display: -moz-flex;
display: -ms-flex;
display: flex;
}
This is the kind of thing that makes CSS tricky (some people would say annoying).
CSS isn’t easy. Combining a very permissive (somewhat broken) syntax with constantly evolving features and rendering inconsistencies makes CSS not that easy at all. Yes, the syntax is simple, but a simple syntax doesn’t make an easy language.
And when you have to deal with performance, modular architecture, and responsive webdesign, it becomes even less easy. But that’s a whole 'nother story.
]]>Yeah, it’s very nice. Even if it’s not an alternative to the Select Element. This is not possible. You cannot do a pure CSS alternative to the Select Element.
There is more to the <select> element than just a click on a button opening a list of options. It involves accessibility, usability, processing, shadow DOM and a lot of various options. A lot of things that CSS can’t do. That CSS isn’t supposed to do.
Now don’t get me wrong, the author at Pepsized did a wonderful job on this article, regarding both the design and the usability (which is far better than what I did at Codrops). (S)he is a good CSS developer, I don’t even question that. But once again, (s)he didn’t provide a CSS alternative to the <select> element. Let me clear things up point by point.
The major concern here is accessibility. The default <select> element is completely usable either with a mouse or a keyboard, following this process:

1. Mouse: click on the <select> element; Keyboard: use the tab key to focus the <select> element
2. Keyboard: press enter to open the list
3. Mouse: click on an option; Keyboard: use the arrow keys to highlight an option
4. Keyboard: press enter to confirm

While making a pure CSS dropdown easily usable with the mouse can be done by pretty much anyone with some CSS knowledge, making it usable with keyboard navigation is a whole other story.
However, it’s doable. You won’t have exactly the same process as above, but you’ll probably be able to pick your option with the arrow keys and such.
Anyway, this introduces some new behaviour (you may call these inconsistencies) for people who can’t use a mouse. Yes, not having to press enter (steps 2 and 4) is probably no big deal for you and me, but for, let’s say, a blind user, it may be confusing.
Mobile devices can become another problem with a home-made <select> element. Mobile devices often mean touch events. There is no more mouse. There is no more keyboard. Now there is a finger.
In most cases, making a custom dropdown accessible for mobile users will take no more than just a few lines of CSS. Basically it requires changing all the hover states to focus states to make things work.
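As a rough sketch of that hover-to-focus switch (the selectors are hypothetical and depend entirely on your markup), it could look like:

```css
/* Desktop: open the fake dropdown on hover */
.fake-select:hover .options {
  display: block;
}
/* Touch/keyboard: open it while the trigger link is focused */
.fake-select .trigger:focus ~ .options {
  display: block;
}
```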
But making things work is not always enough. Mobile browsers have a very efficient way to handle select dropdowns natively, enabling scrolling gestures. When facing a <select> with dozens of options, like a dropdown to pick your country, having a mobile-friendly UI can make the difference between a user who buys/subscribes and a user who leaves.
In most cases, as a developer you will use a <select> element because you want your users to pick an option; an option that you will want to use for your database, your email, or whatever.
Since the <select> element is a form element, it comes with a name attribute and the ability to send POST or GET data through a form. This means you can access the selected option with no more than $_POST['name-of-select-element'] in PHP. With JavaScript, it will probably be something like document.getElementById('name-of-select-element').value.
Fine. Now let’s do this with CSS only. Uh-oh, not possible. If you’re clever enough, you’ll come up with a solution involving hidden radio inputs within your list items. Sounds fair enough; so… you end up using multiple form elements… not to use a form element. Right?
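For the record, the radio-input trick mentioned above could be sketched like this (the class names are made up; the assumed markup pairs a hidden radio with a label inside each “option”):

```css
/* Visually hide the radios while keeping them focusable */
.fake-select input[type="radio"] {
  position: absolute;
  opacity: 0;
}
/* Reflect the checked “option” on its label */
.fake-select input[type="radio"]:checked + label {
  font-weight: bold;
}
```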
Let’s say you don’t mind the extra processing that comes with the multiple radio buttons compared to the regular <select> element…
… what if you want to give your user the ability to select multiple options? Okay, you could still use checkboxes, that sounds legit.
Then let’s talk about other options like required, disabled and autofocus.
I can think of a workaround for disabled with a class on the parent element, using pointer-events to disable clicking on items. Okay.
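That disabled workaround could be sketched as follows (with a hypothetical .is-disabled class on the parent):

```css
.fake-select.is-disabled li {
  pointer-events: none; /* prevents clicking on the items */
  opacity: 0.5;         /* hints at the disabled state */
}
```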
If you come up with a CSS-only solution to force the user to select an option by preventing form submit and displaying a warning message instead, I’d be more than glad to hear it!
You could still use JavaScript. But then, why not simply use the <select> element in the first place?
Even if it’s not much of a concern, using an HTML/CSS “alternative” to the <select> element means using at least a dozen DOM nodes (quickly ramping up with the number of options) and maybe about 50 lines of CSS, perhaps including some heavy CSS properties like shadows or gradients.
Okay, it’s no big deal when you know the average page size is a little over 1.4MB (according to HTTP Archive).
But still, you could have used a single element (including its Shadow DOM) and 0 lines of CSS for a result which beats your alternative on all points except design (and even that is debatable).
Browser makers spend countless hours building native support for a lot of things in order to improve both users’ experience and developers’ lives. Use these native features.
Please, don’t screw accessibility, performance and usability for design purposes. Those things should always come first.
]]>I have to say it’s been a real pleasure to do this, mostly because I’ve learnt literally a ton of stuff. Some people say the best way to learn is through teaching; I can say it’s mostly true.
Anyway, if perspective and perspective-origin have been quite easy to do, I must say grid has been a whole other story. This is by far the most complicated thing I have ever seen in CSS. Let me introduce the topic.
The CSS Grid Layout is currently a W3C Working Draft aiming at fixing issues with older layout techniques by providing a better way to achieve complex interface designs. Indeed, each solution we (have) use(d) to make web pages has at least one flaw:
The CSS Grid Layout consists of defining a 2-dimensional grid in which the children can be positioned as desired. The main benefits of this technique are:
The basic example would be something like this: my .wrapper is my grid; .header will span all columns of the first row; .main will be displayed in the second row, first column; .sidebar in the second row, second column; and .footer in the third row, spanning all columns.
First, reading specifications. If a spec author ever reads this, I am sorry, but the specifications are definitely not for random people. I believe they are mostly written for browser makers, and they are probably very well written, but for a person like me, it’s way too complicated. Unfortunately, I had to dig deep into the spec.
What has been difficult as well is that the only browser supporting it, as of writing, is Internet Explorer 10 (mostly because 3 of the 5 authors of the Grid spec are from Microsoft). And I believe they started implementing the module in their browser engine a while ago, resulting in some inconsistencies with the spec, which keeps moving.
Not only is their implementation at a very early stage (about half the spec is currently supported), but it also differs from the spec at some points. Among other things:

- grid-rows and grid-columns have been renamed grid-definition-rows and grid-definition-columns
- grid-row is supposed to be a shorthand for grid-row-position and grid-row-span; the current Internet Explorer 10 implementation of grid-row actually behaves like grid-row-position (which isn’t supported as such). Same goes for grid-column.

This kind of stuff definitely doesn’t make things easier.
Otherwise, the module is quite complicated by itself. It involves about 15 new properties, a new unit, and more importantly: a whole new way of thinking. Fortunately, the currently supported part of the spec is quite easily understandable and it has been very fun to play around with.
What I’ve found astonishing is the very little amount of CSS required to achieve a complex layout. I counted: with no more than 10 lines of CSS, I’ve been able to make a 3-column layout including 2 fixed-size columns, with a full-width header and footer. Oh, and source order independent. Please have a look at the following markup:
<div class="wrapper">
<article class="main">My awesome content here</article>
<footer class="footer">Some informations here</footer>
<header class="header">My site title goes here</header>
<aside class="sidebar">Here is my side content</aside>
<aside class="annexe">Some more side content</aside>
</div>
Now the CSS. Pay attention to the number of lines:
.wrapper {
display: grid;
grid-columns: 200px 15px 1fr 15px 100px;
grid-rows: (auto 15px) [2] auto;
}
.header,
.footer {
grid-column-span: 5;
}
.sidebar,
.main,
.annexe {
grid-row: 3;
}
.header {
grid-row: 1;
}
.footer {
grid-row: 5;
}
.sidebar {
grid-column: 1;
}
.main {
grid-column: 3;
}
.annexe {
grid-column: 5;
}
Done. 10 lines. No float. No inline-block. No height. No width. No margin. And if you want to make everything nice on small devices, it will take no more than a couple more lines (8 in this example).
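Those extra lines for small devices could look something like this. This is only a sketch using the same draft syntax, with a made-up breakpoint, and not necessarily the exact lines from the demo: every region moves into a single column, each on its own row.

```css
@media (max-width: 40em) {
  .wrapper {
    grid-columns: 1fr;     /* a single column */
    grid-rows: (auto) [5]; /* five stacked rows */
  }
  .header,
  .footer { grid-column-span: 1; }
  .sidebar,
  .main,
  .annexe { grid-column: 1; }
  .main    { grid-row: 2; }
  .sidebar { grid-row: 3; }
  .annexe  { grid-row: 4; }
}
```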
Note: I won’t explain the syntax in this article. If you want to understand how the Grid Layout works, please have a look at the CSS-Tricks Almanac entry.
Are Flexbox and Grid both solutions to the same problem or do they both have their own use case?
— @Lezz
This question comes from Twitter. However I’ve been questioning myself regarding this while making the entry for CSS-Tricks. Let’s have a look at both specifications:
The Flexbox specification describes a CSS box model optimized for user interface design. In the flex layout model, the children of a flex container can be laid out in any direction, and can “flex” their sizes, either growing to fill unused space or shrinking to avoid overflowing the parent. Both horizontal and vertical alignment of the children can be easily manipulated. Nesting of these boxes (horizontal inside vertical, or vertical inside horizontal) can be used to build layouts in two dimensions.
Grid Layout contains features targeted at web application authors. The Grid can be used to achieve many different layouts. It excels at dividing up space for major regions of an application, or defining the relationship in terms of size, position, and layer between parts of a control built from HTML primitives. Like tables, the Grid enables an author to align elements into columns and rows, but unlike tables, the Grid doesn’t have content structure, and thus enables a wide variety of layouts not possible with tables. For example, the children of the Grid can position themselves with Grid lines such that they overlap and layer similar to positioned elements. In addition, the absence of content structure in the Grid helps to manage changes to layout by using fluid and source order independent layout techniques. By combining media queries with the CSS properties that control layout of the Grid and its children, authors can adapt their layout to changes in device form factors, orientation, and available space, without needing to alter the semantic nature of their content.
So as I understand it, the Grid layout is “macro” while the Flexbox module is “micro”. I think Grid will be perfect for organizing the layout structure with high-level elements, whereas Flexbox will be best suited for modules that require specific alignment and ordering, like a fluid navigation for example.
Having played with the module for hours, I can tell it is quite promising. I have been amazed by its efficiency, and I could even mix it with CSS preprocessors: it rocks. The fact it’s fully number-based makes it very easy to use in loops, mixins and functions.
Unfortunately, it is way too soon to use the Grid layout in a real-life project, especially since the browser support is restricted to Internet Explorer 10. However, I’ve heard the support is coming to Firefox and Chrome nightly builds, so I think we will be able to safely play around with it in a few months from now.
Let’s hope that a year from now, browser support will be great in all modern browsers (Chrome, Firefox, Opera, IE10+, including some mobile browsers), giving us the ability to use it in projects that don’t target old browsers.
Meanwhile, you can still experiment with it on Internet Explorer. Here are a couple of useful resources on the topic:
]]>left and top accordingly to position everything around the circle.
But in most cases, you would have ended doing this with JavaScript, or jQuery. There are plenty of plugins doing this out there, and no doubt they are all good.
But what if you could do it very simply with CSS? That’s what Ana Tudor did in an answer on StackOverflow. Instead of using basic positioning, she relies on chained CSS transforms to do it. God, this is brilliant. Well? Let’s push it further.
Ana’s work is great, I’m not questioning this. However, adding or removing elements can be tricky. Before going any further, let’s see how she does this:
[…] You then decide on the angles at which you want to have your links with the images and you add a class deg{desired_angle} (for example deg0 or deg45 or whatever). Then for each such class you apply chained CSS transforms, like this:
.deg{desired_angle} {
transform:
rotate({desired_angle})
translate({half_parent_size})
rotate(-{desired_angle});
}
…where you replace {desired_angle} with 0, 45, and so on…
The first rotate transform rotates the object and its axes, the translate transform translates the object along the rotated X axis and the second rotate transform brings back the object into position - demo to illustrate how this works.
Because Ana adds specific classes to HTML elements, it’s not very fast to add or remove an element. It requires adding the according class to the new element, and changing the name + CSS of all the other classes to distribute all items evenly along the circle. Bummer.
I was pretty sure I could do something cool and easy with Sass. Indeed, I ended up with a mixin handling all the positioning automagically. Plus, it works with any type of element (li, div, span, a, img, whatever).
Here are the arguments you can pass to the mixin in order to suit your needs:
- $nb-items (integer): the number of items you want to distribute along the circle
- $circle-size (length): the size of your circle
- $item-size (length): the size of an item
- $class-for-IE (string|false) (optional): class used as a fallback for pseudo-selectors (defaults to false, meaning no fallback)

Thus, usage is pretty straightforward:
.my-container {
/**
* With no support for old IE
*/
@include distribute-on-circle(
$nb-items: 8,
$circle-size: 24em,
$item-size: 6em
);
/**
* With support for old IE
* Using class “item” (.item1, .item2, .item3, etc.)
*/
@include distribute-on-circle(
$nb-items: 8,
$circle-size: 24em,
$item-size: 6em,
$class-for-IE: 'item'
);
}
If the number of items in the container is greater than the parameter given to the mixin, the remaining children are nicely stacked on top of each other at the center of the parent, not breaking anything.
It’s pretty easy. It divides 360° by the number of items you ask for to compute the angle between 2 items. Then, it runs a @for loop using pseudo-selectors (:nth-of-type()) to assign the appropriate transforms to each element.
$rot: 0; /* Rotation angle for the current item */
$angle: (360 / $nb-items); /* Angle between two items */
@for $i from 1 through $nb-items {
&:nth-of-type(#{$i}) {
transform: rotate($rot * 1deg)
translate($circle-size / 2)
rotate($rot * -1deg);
}
// Increments the `$rot` variable for next item
$rot: ($rot + $angle);
}
Outputs (with 8 items and a 24em large container)…
.container > *:nth-of-type(1) {
transform: rotate(0deg) translate(12em) rotate(-0deg);
}
.container > *:nth-of-type(2) {
transform: rotate(45deg) translate(12em) rotate(-45deg);
}
.container > *:nth-of-type(3) {
transform: rotate(90deg) translate(12em) rotate(-90deg);
}
.container > *:nth-of-type(4) {
transform: rotate(135deg) translate(12em) rotate(-135deg);
}
.container > *:nth-of-type(5) {
transform: rotate(180deg) translate(12em) rotate(-180deg);
}
.container > *:nth-of-type(6) {
transform: rotate(225deg) translate(12em) rotate(-225deg);
}
.container > *:nth-of-type(7) {
transform: rotate(270deg) translate(12em) rotate(-270deg);
}
.container > *:nth-of-type(8) {
transform: rotate(315deg) translate(12em) rotate(-315deg);
}
The main problem with this technique is that IE8- doesn’t support pseudo-selectors and CSS transforms.
The first thing is easily fixed either with a plugin like Selectivizr to enable support for pseudo-selectors on old browsers or a little bit of JavaScript to add a numbered class to each child of the parent. Here is how I did it (with jQuery):
var $items = $('.parent').children()
$items.each(function () {
var $item = $(this)
var index = $item.index() + 1
$item.addClass('item' + index)
})
Then, the CSS would be slightly altered:
@for $i from 1 through $nb-items {
&.#{$class-for-IE}#{$i} {
/* … */
}
}
First problem solved. Now let’s deal with the biggest one: IE8- doesn’t support CSS transforms. Fortunately, we can write a fallback that will make everything work on these browsers as well, using margins.
Basically, instead of rotating, translating then rotating back each element, we apply top and left margins (sometimes negative) to it to place it on the circle. Fasten your seatbelts folks, the calculations are pretty insane:
$margin-top: sin($rot * pi() / 180) * $half-parent - $half-item;
$margin-left: cos($rot * pi() / 180) * $half-parent - $half-item;
margin: $margin-top 0 0 $margin-left;
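To convince yourself the formulas do what we want, here is a quick JavaScript equivalent of the math above (using the demo’s hypothetical numbers: a 24em circle, so a 12em half-parent, and 6em items, so a 3em half-item):

```javascript
const halfParent = 12; // half the circle size, in em
const halfItem = 3;    // half the item size, in em

// Same math as the Sass above: sin/cos expect radians,
// hence the degrees-to-radians conversion.
function marginsFor(rot) {
  const rad = (rot * Math.PI) / 180;
  return {
    top: Math.sin(rad) * halfParent - halfItem,
    left: Math.cos(rad) * halfParent - halfItem,
  };
}

// The item at 90° ends up 9em down and pulled 3em left: the
// subtracted half-item compensates for the item's own size.
```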
Yes, it’s definitely not the easiest way to do it as it involves some complicated calculations (thanks Ana for the formulas), but it works like a charm!
Now how do we use all this stuff for IE8- without messing with modern browsers’ stuff? I found that the easiest solution is to add a flag to the mixin: if it’s turned on, it means we need to support old IE, thus we use classes and margins. Otherwise, we use transforms and pseudo-selectors. Consider the following structure:
@mixin distribute-on-circle(
$nb-items,
$circle-size,
$item-size,
$class-for-IE: false
) {
/* … */
@for $i from 1 through $nb-items {
@if not $class-for-IE {
&:nth-of-type(#{$i}) {
/* Use transforms */
}
} @else {
&.#{$class-for-IE}#{$i} {
/* Use margins */
}
}
}
}
Et voilà! We now have a mixin working back to IE7 (maybe even IE6) thanks to very little JavaScript. Isn’t that nice?
That’s all folks! If you have any suggestion to improve it, please be sure to share! Meanwhile, you can play with my demo on CodePen.
Check out this Pen!
]]>It occurred to me that there were a couple of Compass features which remain pretty much unknown to most users, so I thought it could be a good idea to write a short blog post about them.
Compass defines 5 CSS constants: top, right, bottom, left and center.
The point of these inalterable variables is the opposite-position() function, which returns the opposite value for each constant. Please consider the following example:
$direction: left;
$opposite: opposite-position($direction); /* Outputs “right” */
$position: top right;
$opposite: opposite-position($position); /* Outputs “bottom left” */
Note: the opposite of center is center.
I personally used this function on this very site, when it comes to image and quote pulling (L32 and L47).
@mixin pull-quote($direction) {
$opposite: opposite-position($direction);
text-align: $opposite;
float: $direction;
margin: 0 0 0.5em 0;
margin-#{$opposite}: 1em;
border-#{$opposite}: 6px solid hotpink;
padding-#{$opposite}: 1em;
}
So $opposite equals right when $direction is left, and vice versa. This allows me to write only one mixin instead of two!
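For instance, @include pull-quote(left) should compile to roughly the following (the .pull-quote selector is made up for the example; the opposite side being right):

```css
.pull-quote {
  text-align: right;
  float: left;
  margin: 0 0 0.5em 0;
  margin-right: 1em;
  border-right: 6px solid hotpink;
  padding-right: 1em;
}
```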
elements-of-type() is a built-in function to detect the display type of an element: block, inline, inline-block, table, table-row-group, table-header-group, table-footer-group, table-row, table-cell, list-item and, as odd as it may look, html5, html5-inline and html5-block.
Note: html5, html5-inline and html5-block are not real display values; they are just keywords to gather all HTML5 elements (inline, block or both).
This may be useful as part of a CSS reset for example:
@mixin reset-html5 {
#{elements-of-type(html5-block)} {
display: block;
}
}
This snippet forces all HTML5 elements to be displayed as block elements, even in browsers that don’t support them natively.
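For reference, the interpolated selector compiles to something along these lines (the exact element list depends on your Compass version):

```scss
// Approximate output of the reset-html5 mixin above
article, aside, details, figcaption, figure,
footer, header, hgroup, menu, nav, section {
  display: block;
}
```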
experimental() has to be one of the most used functions in Compass, and probably one of the least known at the same time.
Basically, experimental()
allows you to define mixins outputting content depending on which vendor prefixes are enabled. It is used in all built-in CSS3 mixins. When you use @include box-sizing(border-box)
, here is what happens:
@mixin box-sizing($bs) {
$bs: unquote($bs);
@include experimental(box-sizing, $bs, -moz, -webkit, not -o, not -ms, not -khtml, official);
}
This outputs:
.element {
-webkit-box-sizing: border-box;
-moz-box-sizing: border-box;
box-sizing: border-box;
}
-o-
, -ms-
(and -khtml-
) are omitted because the box-sizing()
mixin calls experimental()
while explicitly telling it not to output lines for Opera and Internet Explorer.
Now what’s the point of such a tool? As an example, there is no default mixin for CSS animation in Compass. Let’s make one!
@mixin animation($content) {
@include experimental(animation, $content, -webkit, -moz, not -o, not -ms, official);
}
.element {
@include animation(my-animation 3s ease);
}
This outputs:
.element {
-webkit-animation: my-animation 3s ease;
-moz-animation: my-animation 3s ease;
animation: my-animation 3s ease;
}
Hum, hacks. I know what you’re thinking: NOOOOOO! Anyway, Compass provides a couple of features to take advantage of Internet Explorer’s inconsistencies and weaknesses.
You may have already heard of has-layout
. This article explains it way better than I could:
“Layout” is an IE/Win proprietary concept that determines how elements draw and bound their content, interact with and relate to other elements, and react on and transmit application/user events. This quality can be irreversibly triggered by some CSS properties. Some HTML elements have “layout” by default. Microsoft developers decided that elements should be able to acquire a “property” (in an object-oriented programming sense) they referred to as hasLayout, which is set to true when this rendering concept takes effect.
Back to our business: Compass gives two ways to trigger hasLayout
on elements: with zoom
(using the zoom
MS proprietary property) or with block
(using the display
property). I’d go with zoom, even though it doesn’t validate, mostly because I’m used to it.
.element1 {
@include has-layout(zoom);
}
.element2 {
@include has-layout(block);
}
Outputs…
.element1 {
*zoom: 1;
}
.element2 {
display: inline-block;
}
.element2 {
display: block;
}
You now understand why I use the zoom approach. Otherwise, Compass also provides another way to target IE6 specifically, called the bang hack. It relies on IE6’s inability to understand the !important
flag:
.element1 {
@include bang-hack(color, red, blue);
}
Outputs…
.element1 {
color: red !important;
color: blue;
}
Since IE6 doesn’t understand !important
, it will apply the latter declaration while other browsers will honor the !important-flagged one.
Compass gives you a way to know the dimensions of an image (given as a path) with two functions: image-width()
and image-height()
.
.element {
$image: 'my-awesome-background.jpg';
background: url($image);
width: image-width($image);
height: image-height($image);
}
In this example, the element is sized according to the background image it uses.
Note: beware, the path has to be relative to your project’s image directory, defined in your config.rb
file.
If you’re a total nerd and want to do math in your CSS, then you’ll be pleased to know Compass has a bunch of built-in math functions like sin()
, cos()
, pi()
among a few others.
I once had to use sin()
in order to make a mixin for a pure CSS six-pointed star, but that’s pretty much it. If you happen to have a real-life use case for one of these functions, I’d be more than pleased to hear about it.
$n: 4;
$pow: pow($n, 2); /* Returns 16 */
$sqrt: sqrt($n); /* Returns 2 */
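As a hint of what sin() makes possible, here is a minimal sketch (assumed values, and assuming Compass’s sin() returns a unitless number for a degree input), relying on the fact that the height of an equilateral triangle is its side times sin(60°):

```scss
$size: 100px;

.triangle {
  width: $size;
  // sin(60deg) is roughly 0.866, so this yields roughly 86.6px
  height: $size * sin(60deg);
}
```

This is exactly the kind of math involved in building a pure CSS star out of triangles.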
Compass provides some features to play with selectors like nest()
, append-selector()
or headings()
.
Once again, I am not sure there are many real-life use cases for such functions, but let me show you a demo; maybe you’ll find one:
/* nest() */
nest(".class1", ".class2");
/* Outputs ".class1 .class2" */
nest(".class1, .class2", ".class3");
/* Outputs ".class1 .class3, .class2 .class3" */
/* append-selector */
append-selector(".class1", ".class2");
/* Outputs ".class1.class2" */
append-selector("a, p, li", ".class");
/* Outputs `a.class, p.class, li.class` */
/* headings() */
#{headings()} {
font-family: 'My Awesome Font';
/* Set font-family to all headings */
}
#{headings(1, 3)} {
font-weight: bold;
/* Set font-weight to h1, h2, h3 */
}
Compass provides several helpers to ease a common task: image replacement, i.e. when you have an element with text content but want the text to disappear so the background image shows through instead.
.element {
@include hide-text(right);
}
Outputs…
.element {
text-indent: 110%;
white-space: nowrap;
overflow: hidden;
}
The hide-text()
mixin takes a direction as a parameter. The default one is left
, resulting in a text-indent: -199988px
with a 16px
baseline. You definitely should use right
for better performance.
So people, how many of these features did you know? If you have some free time, I highly recommend digging into the Compass documentation. You’d be surprised how little of the framework you actually know.
Everybody loves relative units. They are handy and help us solve daily problems. However, the most used one (em
) presents some issues, especially when it comes to nesting.
As an example, setting both p
and li
tags font-size to 1.2em
may seem fine. But if you ever happen to have a paragraph inside a list item, it results in a font-size 1.44 times (1.2 * 1.2) bigger than the parent font-size, not 1.2 as intended.
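To see the compounding in action, consider this minimal sketch:

```scss
p, li {
  font-size: 1.2em;
}

// With a 16px parent:
// li   → 16px × 1.2   = 19.2px
// li p → 19.2px × 1.2 = 23.04px, i.e. 1.44 × 16px (not what we wanted)
```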
To avoid this, a new unit has been created: rem
. It stands for root em. Basically, instead of being relative to the font-size of its direct parent, it’s relative to the font-size defined for the html
element.
You may have already seen something like this in frameworks, demo, blog posts and such:
html {
font-size: 62.5%;
}
body {
font-size: 1.6rem;
}
Because all browsers have a default font-size of 16px
, setting the font-size to 62.5% on the html element gives it a font-size of 10px (10 / 16 * 100 = 62.5) without explicitly setting it to 10px
which would prevent zooming. Then, setting a font-size of 1.6rem on the body element simply results in a font-size of 16px
, cascading through the whole DOM tree.
Then, if I want an element to have, say, a 28px
font-size, I simply have to do .element { font-size: 2.8rem; }
, no matter the size of its parent.
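In other words, with the 62.5% trick the conversion is a simple division by 10 (the class names below are just for illustration):

```scss
html { font-size: 62.5%; } // 1rem now equals 10px

.title { font-size: 2.8rem; } // renders at 28px
.meta  { font-size: 1.2rem; } // renders at 12px
```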
Everything is great; however, rem isn’t supported in all browsers, notably not in Internet Explorer 8, which still has to be supported in most projects. It means we have to provide a fallback for this browser.
Having to declare the font-size property twice every time you set the size of a text element sucks. This is the moment you’d like a wonderful mixin to handle everything for you. Well, WISH GRANTED!
There are already many mixins handling the px fallback for rem usage, and most of them do it very well. However, this one pushes things a step further. It is inspired by this rem mixin by Hans Christian Reinl, revamped by myself to make it even more awesome. Among its features: it accepts either px or rem as an input value, and it handles multiple values like 10px 20px (for padding or margin, for example).
html {
font-size: 62.5%; /* 1 */
}
@function parseInt($n) {
/* 2 */
@return $n / ($n * 0 + 1);
}
@mixin rem($property, $values) {
$px: (); /* 3 */
$rem: (); /* 3 */
@each $value in $values {
/* 4 */
@if $value == 0 or $value == auto {
/* 5 */
$px: append($px, $value);
$rem: append($rem, $value);
} @else {
$unit: unit($value); /* 6 */
$val: parseInt($value); /* 6 */
@if $unit == 'px' {
/* 7 */
$px: append($px, $value);
$rem: append($rem, ($val / 10 + rem));
}
@if $unit == 'rem' {
/* 7 */
$px: append($px, ($val * 10 + px));
$rem: append($rem, $value);
}
}
}
@if $px == $rem {
/* 8 */
#{$property}: $px; /* 9 */
} @else {
#{$property}: $px; /* 9 */
#{$property}: $rem; /* 9 */
}
}
This may be a bit rough, so let me explain it:
1. we set the root font-size to 62.5% so that 1rem equals 10px;
2. the parseInt() function strips the unit off a number (10px becomes 10);
3. we initialize two empty lists, one for the px values and one for the rem values;
4. we iterate through each value in $values;
5. if the value is auto or 0, we append it to both lists as-is;
6. otherwise, we grab its unit and its unitless value;
7. if the unit is px, we append the value to the px list and its rem conversion to the rem list, and vice versa;
8. if both lists turn out identical (e.g. margin-top: 0), we only output the property once;
9. otherwise, we output the px declaration first as a fallback, then the rem one.
Thanks to Moving Primates for improving the mixin by adding step 8. ;)
Using it is pretty straightforward:
html {
font-size: 62.5%;
}
body {
@include rem(font-size, 1.6rem);
@include rem(padding, 20px 10px);
}
… outputs:
html {
font-size: 62.5%;
}
body {
font-size: 16px; /* Fallback for IE8 */
font-size: 1.6rem;
padding: 20px 10px; /* Fallback for IE8 */
padding: 2rem 1rem;
}
There are still some issues with this mixin:
- the baseline (10px) is hardcoded; it could be passed as a $baseline parameter to the mixin
- it relies on the custom parseInt() function; I’ve proposed it to Compass, let’s hope they add it anytime soon
If you ever happen to find a decent solution for one of these, I’ll be glad to know and add it!
That’s pretty much it folks. I’d be glad to hear your opinion on this and improve it with your ideas. :)
If you want a playground to test and hack, please feel free to fork my pen.
A couple of days ago, the famous French frontend developer Vincent De Oliveira wrote a blog post called Why I don’t use CSS preprocessors (Pourquoi je n’utilise pas les préprocesseurs CSS). If you can read French, or can stand Google Translate, then I highly recommend this article, full of good points and interesting opinions.
Please don’t consider this post as an answer to Vincent’s one. I just wanted to share my opinion on the topic, not open a flame war. Especially since I like him. :)
There is no point debating about whether preprocessors are good or evil: they are good. If you think they are evil, it’s either because you’re afraid of them, or because you suck at them. The question isn’t even which one to choose: they all do pretty much the same things (even if Sass is slightly more robust than others). The main topic is: should or shouldn’t you use one?
There are cases where you don’t want to use a preprocessor (whatever the language). The main case is when your team involves some beginners or inexperienced developers: if they are not very comfortable with the language, it will be even worse with a preprocessor.
The other case is when you are dealing with very small projects or one-shot websites, meaning you don’t plan on updating often. Then, a preprocessor isn’t that useful.
Let’s make things clear right now: preprocessors don’t output bad code, bad developers do. CSS preprocessors, whatever the one you (don’t) use, do not generate top-heavy, unwieldy, unnecessarily complicated code. This is a lie bad developers will tell you to excuse the quality of their own code.
If the final stylesheet is less maintainable or heavier, or more complicated than the vanilla CSS version you had before using a preprocessor, it’s because you messed up. Not because of Sass.
Vincent does an interesting comparison with PHP (HyperText Preprocessor): you can output shitty code with PHP too. Is it because of PHP? Definitely not. It’s because you’ve messed up.
Some people say preprocessors don’t make you write CSS faster. Indeed, you won’t become Iron Man as soon as you run Sass, definitely not. Even if, in the end, you do write code slightly faster, if only because you don’t have to write vendor prefixes, for example.
You don’t save much time while coding. You save time when it comes to maintaining and updating your stylesheets. It’s a no-brainer. This also means that if you don’t plan on updating your site, there is less point in using a preprocessor. Which brings me to the next argument.
I think the key word here is maintainability. You will never ever reach the same level of maintainability without a CSS preprocessor. Ever.
However, you might not need that level of maintainability. As Kaelig says in his article CSS preprocessors: renounce by choice or ignorance? (Préprocesseurs CSS, renoncer par choix ou par ignorance?): if you work on small projects or one-shot websites, you may not need a preprocessor. Let’s be realistic for a minute: you won’t update the site every day, if at all. If you ever happen to do so, you can dig into the code without the help of a preprocessor.
Vincent says preprocessors don’t add anything to the default language. In a sense, yes. Sass isn’t magic. CoffeeScript isn’t magic. Markdown isn’t magic. In the end, they render CSS, JS and HTML.
But CSS preprocessors give CSS what it lacks. CSS lacks variables, above all. CSS arguably lacks simple nesting for pseudo-classes. CSS might lack functions and mixins. Preprocessors give developers all this stuff, without hurting performance.
Yes, we can do sites without these features. It’s just nice to have them. Saying otherwise would be a big fat lie. But of course we can still make sites without preprocessors.
In fact, I don’t need a preprocessor. I say it: I don’t. I’m not working on 10000-lines stylesheets. I’m not working on multiple templates websites. I’m not working on complex CSS architectures. I could do every single project I do without Sass.
But Sass looks better than CSS to me (at least in most cases). I like being able to use variables. I like being able to use mixins and functions. I like being able to use Compass. I like all of this stuff, even if I don’t necessarily need it. It feels more normal to me.
Sass also provides stylesheet concatenation and file minification (among other things), which is somewhat outside the scope of CSS but awesome nevertheless.
Preprocessors make CSS more complex. […] I said more “complex” not more “complicated”. You can think preprocessor’s syntax is simple, it is still more complex than the default one.
Vincent is definitely right on this one. Preprocessors sometimes make the syntax more complex by adding new features; not necessarily more complicated, simply more complex (no pun intended).
One of the biggest concerns when talking about CSS preprocessors (and preprocessors in general) is the learning curve. Most try to stay as close as possible to the default syntax but they involve new features with their own syntax, which need to be learnt. Yes, it needs some time to wrap one’s head around a preprocessor, especially if it involves a very different syntax from the original language (Sass, CoffeeScript).
If you happen to be a beginner or work with inexperienced developers, you probably shouldn’t use preprocessors. Someone who’s not very comfortable with a language could do pretty bad things with a preprocessor. Adapt your tools to your team.
In the end, most arguments against preprocessors are bullshit. All those claims about not speeding up development or outputting bad code are irrelevant. Most people telling you this are the ones who have never even tried a preprocessor for real.
The only thing to ask is: can I afford one? If you think you or one of your co-workers won’t be able to handle everything correctly, then the answer is no. Otherwise just please yourself and go ahead. :)
The wonderful Manoela Ilic, also known as Mary Lou, co-founder of Codrops, has agreed to answer a few questions. Below is her interview. Enjoy!
I’m Manoela, a 31-year-old web designer and developer, and I have been creating things for Codrops since 2009. I studied Cognitive Science in Germany and then Computational Logic in Portugal.
I worked in a software company for a while before I decided to become a freelancer and launch Codrops. Since I was a kid I was always fascinated with computers and I created my first website when I was 16 (it had some fancy Flash buttons, I remember) :)
In my personal life I like to travel a lot (in fact, most of the time I am travelling). I love to eat and make good food, drink a great wine and take care of my balcony herb garden whenever I have some spare time.
I set up a WordPress blog in late 2009 thinking that I could share some useful snippets with fellow developers. I was doing some beginner mobile web development back then and I just wanted to share what I learnt and what I thought could be helpful. Snippets turned into tutorials over time and now Codrops has turned into an almost full-time job :)
What I do is plan, design, implement and write tutorials together with my partner Pedro. I also manage the blog and review articles by our writers.
WordPress seemed like the most fitting blog engine at the time and I’ve been happy with it ever since. I love the community around it. All the development and implementation was done by me and Pedro.
I guess the most challenging but also most exciting part is to come up with interesting and original ideas and concepts that are somehow inspiring and helpful to web designers and developers. At Codrops we try to provide ideas and new perspectives that serve as a source of inspiration. So we always create a story for what we want to show and setting up that story is the most delicate part.
There are always things that could be done better when looking back. But in general I am quite happy with how Codrops turned out thanks to our readers and I wouldn’t want to change anything about that, I guess. It’s just like with everything else in life: if you hadn’t done it that exact way, you might not have learned what you know now. And if you are happy with what you’ve learned, it probably was a good path to choose.
There are many things that we want to add to and improve at Codrops. We are currently working on some new sections that we want to release this year. And we are of course planning to do more tutorials and provide more articles that will be interesting and useful to our readers.
We currently have about 850,000 unique visitors and 9.5 million pageviews every month. Our readers spend an average of 4.25 minutes on Codrops and they view about 6 pages per visit. Almost half of them are from the United States. We have 512 published posts and more than 20,000 comments. In total, we have 45 authors, most of whom were guest authors with a single contribution.
The most successful article was Original Hover Effects by Alessio Atzeni.
Yes, I have some side projects that I’m working on and I also work for some clients. If I had more time I would definitely spend it all on answering our readers’ questions and helping them with their problems.
I’d like to thank you for the opportunity and I’d like to thank all the readers of Codrops for their support. I’d also like to wish you all the best for your website and blog and I’m looking forward to read many of your articles :)
Well, thank you very much for your time Manoela! Wish you the best for both your work at Codrops and your personal life. Oh, and congratulations for being part of the 2013 Awwwards jury! :)
You know what would be awesome? Another CSS grid system!
— No one ever.
In this era of multiple devices, responsive design, frameworks and all this stuff, CSS grids have become more and more popular. The main purpose of these tools is to define a baseline in order to achieve a consistent and predictable layout in all situations.
This is a good idea, even an important one. Being consistent on all devices is a big deal, and CSS grids really help to figure this out.
So in the last few months we have seen a bunch of new CSS grid systems appear, including Twitter Bootstrap’s grid module, Zurb Foundation, 960.gs, The 1140px Grid, Blueprint, KNACSS, YAML, Ingrid, Golden Grid System, InuitCSS’s grid module, Toast, and I’m probably forgetting a few.
They all are great grid systems. This makes me get to the point…
Sad but true. We do not need your CSS grid, framework or whatever you like to call it. There are already too many; some of them are built by professionals, teams and CSS architects, which means they will always be better than yours.
Now don’t get me wrong: building your own is a good thing. But people don’t need it. Ask yourself this: among all the existing grids, including ones with hundreds of closed issues, why would I choose yours?
Unless you’re coming up with something really interesting and innovative like Trevor Davis did with his Diamond Grid, or Harry Roberts with CSSWizardry-Grids, there is no reason people should pick yours among all the others. Sorry.
But wait… There’s more.
Yes. Even you do not need your own grid system, at least in most cases. Let’s think about it: either your project truly needs a grid, in which case you’d better go with an existing one that has proven to be reliable, or your project is small enough not to need one at all.
Given this postulate, we can ask ourselves why there are so many grid systems. Because they are fun to build, especially on top of a CSS preprocessor. It is a very good exercise for practicing CSS skills and learning a preprocessor.
You may be familiar with the KISS principle which says things are generally better when they are simple. This works for CSS grids too.
If you are working on a simple layout, even a responsive one, you’ll find that you can do things by hand without much hassle. I was first using the 1140px Grid on this site until I realized it weighed a few kilobytes for what could be done in about 6 or 7 lines of CSS.
Once again, don’t get me wrong. I don’t say grids are bad, or shouldn’t be used. I’m just warning you against using a grid when you don’t need one. It can do more harm than good sometimes.
If you haven’t read this good article by Chris Coyier about not overthinking grids, then you definitely should.
Just don’t overthink grids. Unless you’re building a fully fluid, responsive, 3+ column layout, try doing your CSS architecture without loading thousands of bytes.
Keep it simple.
First, let me remind you what Arley did in his experiment, the topic of a great article at CSS-Tricks. His idea was to change some content according to the screen size.
In order to do that, he used a pseudo-element and filled the content
property accordingly. With about 160 media query calls, he managed to change the content every 10px from 1920px to 300px (device width).
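To picture what that means written by hand, here is what two of the roughly 160 nearly identical blocks look like (breakpoint values and words shown here are illustrative):

```scss
@media screen and (max-width: 1920px) {
  .be:after { content: 'Be Unconventional.'; }
}

@media screen and (max-width: 1910px) {
  .be:after { content: 'Be Flabbergasting.'; }
}

/* …and so on, every 10px down to 300px */
```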
Check it live on his website home or at CSS-tricks.
Great idea, really. It works great, it looks great, the only downside is… it’s a pain in the ass to code.
This is where Sass — or any CSS preprocessor really — can be very efficient. It took me about 10 minutes to divide the amount of required code by 4. Plus, it makes everything so much easier to adapt and maintain.
Check out this Pen!
If you simply want to see the code and don’t care much about how I did it, please check this CodePen (fullsize here) and resize your browser like a fucking obsessive.
Okay, this is no magic. I had to write all the words Arley used all over again. I guess I could have written a little JavaScript to parse Arley’s stylesheet and turn it into a list, but it would have been even more time-consuming.
So basically I created a Sass list containing all the words ordered from longest to shortest. Luckily, Arley had already done this part of the job, so I didn’t have to do it again.
$words: 'Unconventional', 'Flabbergasting', 'Scintillating', 'Extraordinary',
'Unforgettable', 'Unpredictable', 'Dumbfounding', 'Electrifying',
'Overwhelming', 'Incomparable', 'Entertaining', 'Magnificient', 'Confounding',
'Resourceful', 'Interesting', 'Adventurous', 'Bewildering', 'Astonishing',
'Fascinating', 'Outstanding', 'Influential', 'Imaginative', 'Nonsensical',
'Stimulating', 'Exceptional', 'Resplendent', 'Commanding', 'Determined',
'Remarkable', 'Incredible', 'Impressive', 'Perplexing', 'Passionate',
'Formidable', 'Stupefying', 'Refreshing', 'Delightful', 'Incredible',
'Innovative', 'Monumemtal', 'Surprising', 'Stupendous', 'Staggering',
'Delectable', 'Astounding', 'Responsive', 'Courageous', 'Outlandish',
'Marvelous', 'Whimsical', 'Versatile', 'Motivated', 'Brilliant', 'Eccentric',
'Wonderful', 'Excellent', 'Thrilling', 'Inspiring', 'Exquisite', 'Inventive',
'Colourful', 'Delicious', 'Fantastic', 'Audacious', 'Dexterous', 'Different',
'Confident', 'Enthused', 'Peculiar', 'Glorious', 'Smashing', 'Splendid',
'Adaptive', 'Daunting', 'Imposing', 'Striking', 'Charming', 'Dazzling',
'Engaging', 'Resolute', 'Intrepid', 'Dramatic', 'Original', 'Fearless',
'Flexible', 'Creative', 'Animated', 'Puzzling', 'Shocking', 'Intense',
'Elastic', 'Pointed', 'Unusual', 'Devoted', 'Amusing', 'Radiant', 'Refined',
'Natural', 'Dynamic', 'Radical', 'Bizarre', 'Curious', 'Amazing', 'Lively',
'Modest', 'Mighty', 'August', 'Unique', 'Absurd', 'Brazen', 'Crafty',
'Astute', 'Shrewd', 'Daring', 'Lovely', 'Nimble', 'Classy', 'Humble',
'Limber', 'Superb', 'Super', 'Ready', 'Crazy', 'Proud', 'First', 'Light',
'Alert', 'Lithe', 'Fiery', 'Eager', 'Quick', 'Risky', 'Adept', 'Sharp',
'Smart', 'Brisk', 'Fresh', 'Swift', 'Novel', 'Giant', 'Funky', 'Weird',
'Grand', 'Alive', 'Happy', 'Keen', 'Bold', 'Wild', 'Spry', 'Zany', 'Nice',
'Loud', 'Lean', 'Fine', 'Busy', 'Cool', 'Rare', 'Apt', 'Fun', 'Hot', 'Big';
Pretty big, right? Don’t worry, the worst part is over. Now it’s all about easy and interesting stuff.
One loop to rule them all,
One loop to bind them,
One loop to bring them all,
And in the Sass way bind them.
Now we have the list, we only have to loop through all items in it and do something, right?
$max: 1910px; /* [1] */
.be:after {
@each $word in $words {
/* [2] */
@media screen and (max-width: $max) {
/* [3] */
content: 'Be #{$word}.'; /* [4] */
}
$max: ($max - 10); /* [5] */
}
}
1. we define the initial max-width (1910px);
2. we loop through every word in the list;
3. we open a media query bound to the current max-width;
4. we fill the content property with the current word;
5. we decrease $max by 10px for the next iteration.
Please note we could also write it this way:
$max: 1910px;
@each $word in $words {
@media screen and (max-width: $max) {
.be:after {
content: 'Be #{$word}.';
}
}
$max: ($max - 10);
}
This outputs exactly the same thing. It’s really a matter of where you want to put the Media Query call: inside or outside the selector.
That’s pretty much it. Fairly simple, isn’t it? This means we can easily add another word to the list without having to copy/paste or code anything. Simply add the word.
However, if we add a couple of words, the last one will trigger under a 300px device width, which gets kind of small. To prevent this, we could reverse the loop, starting from the smallest width and increasing 10px at a time.
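One possible sketch of that reversed loop (untested, keeping the list ordered longest to shortest) uses min-width queries and walks the list backwards, so that at any viewport the last matching rule, i.e. the longest fitting word, wins:

```scss
$min: 300px;

@for $i from 1 through length($words) {
  // Walk the list backwards: the shortest word pairs with the smallest width
  $word: nth($words, length($words) - $i + 1);

  @media screen and (min-width: $min) {
    .be:after {
      content: 'Be #{$word}.';
    }
  }

  $min: $min + 10px;
}
```

With this approach, adding words to the top of the list extends the wide end of the range instead of pushing the last word below 300px.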
Thanks again to Arley McBlain for his awesome CSS experiment!
Browserhacks is an extensive list of browser-specific CSS (and some JavaScript) hacks gathered from all over the interwebz.
There are two main reasons that lead us to create Browserhacks.
The first one is that we couldn’t find a place where all (or at least many) hacks were gathered (or it was way too old; Netscape 4 says hi!). The best spot at the moment is this great blog post by Paul Irish, but a) it’s a blog post; b) there is a lot of interesting stuff in the hundreds of comments that nobody will ever read anymore.
Anyway, we thought it could be a good idea to get our hands a little dirty and merge all the cool sources we could find on the topic into a lovely tool.
The other reason is that we wanted to do something together for quite a while now and it was a very good opportunity to do it! So we did.
We is short for a group of six people gathered under the sweet name of 4ae9b8. How cool is that name, right?! Anyway, we are:
Tim and I did most of the work on this project; however, everybody participated by giving opinions and advice, and by testing. :)
It couldn’t be any simpler. If you ever happen to be stuck on a rendering bug in let’s say… Internet Explorer 7 (only an example…), you could simply:
If you don’t feel like using this because you don’t like CSS hacks (understandable), simply don’t use it. However if you start trolling, God will kill many kittens.
Browserhacks is built on a PHP/Backbone.js structure. The frontend stuff is built with Tim’s framework Crystallo and Sass.
The source code is available on GitHub. If you find a bug, want to make a suggestion or propose a hack, please open an issue in the bug tracker. Many kudos to you!
Here is what we plan on for the next version:
Hope you like it, happy hacking!
As a reminder, or for those of you who don’t know what Jekyll and GitHub Pages are:
Jekyll is a simple, blog-aware, static site generator written in Ruby by Tom Preston-Werner, GitHub co-founder. It takes a template directory (representing the raw form of a website), runs it through Markdown and Liquid converters, and spits out a complete static website.
GitHub Pages are public webpages freely hosted and easily published through GitHub.
There are a couple of reasons that made me take the decision to move my perfectly-working site (or kind of) to Jekyll and GitHub Pages:
When I launched the new version of the site last November, I wanted things to be as simple as possible. No complicated Rube Goldberg machine, no heavy CMS, none of this stuff. I didn’t even want to use a server-side language.
Every time I wanted to release an article, this is what I did: create a new .html file, write the article in it, then manually update the index page and the RSS feed.
Everything was handled manually and I was pretty happy back then (what a fool…).
But soon enough I realized I couldn’t stand this any longer. Every time I had to edit a single comma in either the head, the sidebar or the footer, I had to open all the files all over again to fix it. YAAAAAAAAY!
So I tried to make things work a little better by themselves. I turned everything to PHP and used include()
for shared parts throughout all pages. It was already way better. But once again I wanted to push things further.
I created a PHP array which was kind of a database to me. It handled both the index page and the RSS feed, and allowed me to quickly show/hide an article from the site by switching a boolean. Here is what it looked like:
$articles = array(
array(
title => "Article title",
desc => "A little article description",
url => "/blog/url-of-the-article",
codrops => false,
guest => false,
status => true //public
),
…
);
It wasn’t bad at all but still wasn’t good enough. I started wondering whether or not I should get back to a real CMS like WordPress. I knew it would please me once everything was settled, but I also knew it would take weeks to get there because moving an existing site to WordPress is very complicated.
Also as a developer, I would probably have not felt very proud of running WordPress for my own site. Don’t get me wrong, WordPress works great but this site is also meant to show what I can do.
This is why I wanted another, simpler option, so I asked on Twitter. A couple of people recommended either Jekyll or Octopress (which runs on Jekyll). I had already heard about it, since the site redesign had been motivated by Dave Rupert’s when he moved to Jekyll.
Back then, I had a look at Jekyll and it seemed nice but overly complicated—at least to me. I am really not that smart once you put CSS aside. Anyway, it seemed to be quite what I was looking for, so I thought I should give it a try.
I looked for tutorials to move a simple site to Jekyll and found a couple of posts explaining the whole process pretty well but the best one has to be this one from Andrew Munsell. If you can read this Andrew, thank you a billion times because I couldn’t have made it without your post. Two or three reads later, I was fucking ready to move that shit to Jekyll.
Ironically, I think this was the hardest part. You see, when I tried to install the Jekyll gem at home (Mac OS X 10.6.8) it threw me an error. It wasn’t starting well.
Thanks to a StackOverflow answer, I understood I was missing some sort of component (the Xcode Command Line Tools or whatever) which could be downloaded from Apple’s official website. Fair enough. After 15 minutes spent trying to remember my Apple ID, I could finally download and install this thing… only to realize it requires Mac OS X 10.7 to run. Damn it.
It’s Sunday morning, I have croissants and coffee. I CAN FIGURE THIS OUT! So I tried updating software components a couple of times to finally realize not only nothing was getting updated, but that it couldn’t update the OS itself, thus I would never get Mac OS X 10.7 this way.
After a few more Google searches and mouthfuls of delicious croissant, I found the horrifying answer: Mac OS X 10.7 cannot be upgraded for free. It is $25. DAMN IT, I JUST WANT TO RUN JEKYLL!
Once again thanks to a StackOverflow answer I could install some other thing (called GCC) which would finally get rid of the error when trying to install Jekyll. Worst part over.
Edit: … kind of. I spent hours trying to install Jekyll on a Windows machine without success. It turns out the latest Rdiscount gem (required by Jekyll to compile Markdown into HTML) cannot compile due to a bug on Windows. As of writing, there is no known fix for this.
Making everything work locally was pretty easy I have to say, especially since my previous PHP architecture was kind of similar to the one I use with Jekyll today (includes, folder structure and such).
To create a blog post, here is what I have to do: add a Markdown (.md) file to the _posts folder, starting with a YAML front matter block like this one:
---
title: Moving to Jekyll
layout: post
---
It is pretty straightforward. If I want to disable comments, it requires no more than switching the comments boolean to false. If it is a Codrops article, I only have to add codrops: url. If it is a guest post, I only have to add guest: Ana Tudor. See? Very simple.
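Under the hood, Jekyll parses the YAML front matter and exposes those variables (title, layout, comments, codrops, guest) to its templates. As a rough illustration only, here is a hypothetical JavaScript sketch of the idea (parseFrontMatter is a made-up helper, not Jekyll’s actual API, which is Ruby):

```javascript
// Hypothetical sketch of front matter parsing: split off the block between
// the two "---" fences and read its flat "key: value" pairs.
function parseFrontMatter(source) {
  const match = source.match(/^---\n([\s\S]*?)\n---\n?/)
  if (!match) return { data: {}, content: source }
  const data = {}
  for (const line of match[1].split('\n')) {
    const index = line.indexOf(':')
    if (index === -1) continue
    data[line.slice(0, index).trim()] = line.slice(index + 1).trim()
  }
  return { data, content: source.slice(match[0].length) }
}

const post = '---\ntitle: Moving to Jekyll\nlayout: post\n---\nHello!'
const { data, content } = parseFrontMatter(post)
// data.title === 'Moving to Jekyll', data.layout === 'post'
```

A real implementation would use a proper YAML parser; this one only handles flat key/value pairs like the ones above.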
It took me no more than a couple of hours with some motivating music to make my website run locally. Not everything was perfect (and still isn’t) but it was something.
Setting up a GitHub Pages based website couldn’t be simpler. It only consists of creating a repo named username.github.com. Easy, right?
The best thing with GitHub Pages is that it is built on Jekyll. This means you can push raw Jekyll source to your repo and GitHub Pages will automagically compile it through Jekyll (on their side). This also means you only really need Jekyll the very first time to set everything up, but then—unless you plan on changing your structure everyday—you don’t really need to use Jekyll at all since GitHub does the compilation.
I could also push the compiled code to the repo, but that would mean I need Jekyll every time I want to update anything on the site. Not great, especially since I work at 4 different places.
From there, I only had to push the local Jekyll site to this repo and about 10 minutes later, the whole thing was hosted and available at kittygiraudel.github.com. Easy as pie.
At this point, I had my site based on Jekyll running on GitHub Pages. However I didn’t want to use kittygiraudel.github.com as the main domain name but kittygiraudel.com. Meanwhile, I had my (previous) website hosted on an OVH server, with kittygiraudel.com pointing to a folder on this server.
Basically I had to tell the GitHub server to serve the content of kittygiraudel.github.com from kittygiraudel.com, and to make kittygiraudel.com point to kittygiraudel.github.com.
According to the GitHub Pages documentation, and a couple of posts on StackOverflow, I understood I had to create a CNAME file at the root of the repo containing the top-level domain I wanted to serve from (kittygiraudel.com), and set an A-record pointing to the GitHub IP from my own server.
This was done, and followed by 12 hours of worry. My site was down and I had no idea whether it would come back up. Since I don’t understand a thing about server stuff and DNS, I could have simply broken everything without even knowing it.
Thankfully I did everything right and the site was back up about 12 hours after the DNS change. However some people are still facing issues when trying to reach the site as of today. I don’t think I can do anything about it except ask them to wait or use a proxy.
Edit: I got in touch with OVH technical support. Basically they told me everything was fine. Users unable to reach the site should try clearing their cache or try from different connections.
This was probably my biggest concern when I decided to change the structure and the host. I knew URLs were going to change and I had no idea how to keep the old URLs working. Anyway, I had to: there are a couple of articles being linked to on a daily basis.
GitHub doesn’t allow .htaccess config for obvious reasons, so I couldn’t set server-side redirects. A StackOverflow answer recommended a Jekyll plugin to handle automatic redirects through aliases, but GitHub Pages compiles Jekyll in safe mode (no plugins), so it wasn’t an option either.
I opted for a very simple—yet not perfect—approach which consisted of creating HTML files at the old locations redirecting to the new files with meta tags. For example, there is a file in a /blog folder called css-gradients.html containing only the basic html/head/body tags and:
<meta http-equiv="refresh" content="0;url=/2013/02/04/css-gradients/" />
Thus, trying to reach kittygiraudel.com/blog/css-gradients (the old URL) automagically redirects to kittygiraudel.com/2013/02/04/css-gradients/. Easy peasy.
However it is not perfect since it requires me to have about 15 files like this in an unused /blog folder. I could do it because I only had 15 articles, but what if I had 300? So if anyone has a cleaner solution, I’ll take it! :)
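In the meantime, generating those redirect files could at least be scripted. A hypothetical sketch (the redirects map and buildRedirect are made up for illustration, not part of my actual setup):

```javascript
// Hypothetical sketch: build meta-refresh redirect pages from a map of
// old paths to new paths, instead of writing each file by hand.
const redirects = {
  '/blog/css-gradients.html': '/2013/02/04/css-gradients/',
}

function buildRedirect(newUrl) {
  return [
    '<!DOCTYPE html>',
    '<html><head>',
    `<meta http-equiv="refresh" content="0;url=${newUrl}" />`,
    '</head><body></body></html>',
  ].join('\n')
}

const page = buildRedirect(redirects['/blog/css-gradients.html'])
```

A real script would loop over the map and write each page to disk (for instance with fs.writeFileSync in Node), which would scale to 300 articles just as well as 15.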
First of all, I must say I am very happy with the porting. All in all, everything has gone pretty well and the downtime hasn’t been that long. Also I am proud of having done all this by myself; kind of a big deal to me.
There are still a couple of things to take care of, like finding a way to preview articles before releasing them without having to run Jekyll, but that’s nitpicking.
If you ever happen to find any bug or if you have a suggestion, please open an issue on GitHub or drop me a comment here.
To be truly honest, I wasn’t that surprised since I am pretty familiar with Ana’s work, which is always amazing. If you haven’t seen her 3D geometric shapes made of pure CSS, then you definitely should.
Anyway, when I saw this I thought it could be fun to make a Sass version of it to clean up the code and make it easier to use. Let me show you what I ended up with.
The first thing was to understand how Ana managed to achieve such a shape with a single element (and 2 pseudo-elements). Long story short: chained CSS transforms.
Basically she stacks the element and its 2 pseudo-elements on top of each other after applying several chained transforms to each of them to have the appropriate shape (a rhombus).
Instead of covering everything here, I’ll let you have a look at this very clear explanation by Ana herself on CodePen.
Note: we could do it with one single pseudo-element using the border shaping trick (but then the hover doesn’t feel right), or with no pseudo-element at all using linear gradients.
I quickly noticed the height and the width of the main element were different. The width is a randomly picked number (10em), but the height seemed to come from some calculation, since it was 8.66em.
At this point, I was already able to write a mixin to create the star, but the user had to set both the width and the height. Yet, since the height has to be calculated, it wasn’t right. How is the user supposed to know the appropriate height for the width they set?
The user couldn’t figure this out and neither could I. So I asked Ana how to compute the height of the element based on the width. After a few complicated explanations, she finally gave me the formula (explanation here).
function computeHeight(x, skewAngle) {
return Math.sin(((90 - skewAngle) * Math.PI) / 180) * x
}
Okay, this is JavaScript but it is a good start. Note that JavaScript’s Math.sin() expects radians, hence the ((90 - skewAngle) * Math.PI) / 180 conversion; a language whose sin() accepts degree values could simply use sin(90° - skewAngle) * x.
From there, I knew how to get the height from the width, I only had to turn this into SCSS. First problem: sin(). I had never heard of any sin() function in Sass. Damn it.
After a little Google search, I stumbled upon a not-documented-at-all library providing advanced math functions in Sass (including sin(), exp(), sqrt(), and much more). It seemed good enough so I gave it a try.
It turned out the power() function (called by the sin() one) was triggering a Sass error. I tried a few things but couldn’t make it work. So I did something unusual… I looked at the 2nd page on Google. And bam, the Holy Grail! Compass has built-in functions for advanced math calculations, including sin(). Isn’t that great? Like really awesome? Building the Sass function was a piece of cake:
@function computeHeight($x, $skewAngle) {
@return sin(90deg - $skewAngle) * $x;
}
This worked like a charm. Given only the width, Sass was able to calculate the corresponding height.
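As a sanity check, feeding the 10em width into the JavaScript version from earlier gives back exactly the mysterious 8.66 value:

```javascript
// The degree-based formula, with the radian conversion JavaScript requires.
function computeHeight(x, skewAngle) {
  return Math.sin(((90 - skewAngle) * Math.PI) / 180) * x
}

const height = computeHeight(10, 30) // ≈ 8.66, matching the 8.66em height
```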
So everything was already working great, but I forced the user to give an em-based value, which sucked. I wanted to accept any unit, knowing that the computeHeight() function requires and returns a unitless value. Basically I had to:

- retrieve the unit of the value given by the user
- strip this unit to pass a unitless number to the computeHeight() function
- append the unit back to the computed result

I had a look in the Sass documentation and I found two related built-in functions:

- unitless(number) returns a boolean depending on whether the value has a unit or not
- unit(number) returns the unit of the value

The first is useless in our case, but the second one is precisely what we need to store the unit of the value given by the user. However we still have no way to parse the number out of a value with a unit. At least not with a built-in function. A quick run on Stack Overflow gave me what I was looking for:
You need to divide by 1 of the same unit. If you use unit(), you get a string instead of a number, but if you multiply by zero and add 1, you have what you need.
@function strip-units($number) {
@return $number / ($number * 0 + 1);
}
Do not ask me how anyone figured this out, I have no idea. But it does work: $number * 0 + 1 yields 1 in the same unit as $number (for instance 1em), and dividing a value by 1 of its own unit cancels the unit, leaving the bare number.
Anyway, at this point we can set the size in any unit we want, be it px, rem, vh, cm, whatever.
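To make what unit() and strip-units achieve more tangible, here is a hypothetical JavaScript model of the same idea (parseCssValue is an invented name, not a Sass or CSS API):

```javascript
// Hypothetical model of Sass's unit()/strip-units: split a CSS value
// like "10em" into its bare number and its unit.
function parseCssValue(value) {
  const match = /^(-?[\d.]+)([a-z%]*)$/.exec(value)
  return { number: parseFloat(match[1]), unit: match[2] }
}

parseCssValue('10em') // → { number: 10, unit: 'em' }
parseCssValue('50%')  // → { number: 50, unit: '%' }
```

The star mixin does exactly this dance: strip the unit, do the math on the bare number, then glue the unit back on.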
Last but not least, Ana used the inherit hack to enable transition on pseudo-elements. She asked me if we had a way in Sass to assign the same value to several properties.
Of course we have, mixin to the rescue!
@mixin val($properties, $value) {
@each $prop in $properties {
#{$prop}: #{$value};
}
}
You give this mixin a list of properties you want to share the same value and of course the value. Then, for each property in the list, the mixin outputs the given value. In our case:
.selector {
&:after,
&:before {
@include val(width height background, 'inherit');
}
}
… outputs:
.selector:before,
.selector:after {
width: inherit;
height: inherit;
background: inherit;
}
It’s really no big deal. We could totally write those 3 property/value pairs by hand, but it is great to see what’s possible with Sass, isn’t it?
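To make the expansion concrete, this throwaway JavaScript mirrors what the mixin generates (expand is a made-up name):

```javascript
// Hypothetical re-implementation of the val mixin's expansion step:
// every property in the list gets the same value.
function expand(properties, value) {
  return properties.map((prop) => `${prop}: ${value};`).join('\n')
}

expand(['width', 'height', 'background'], 'inherit')
// → "width: inherit;\nheight: inherit;\nbackground: inherit;"
```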
Here is the full code for the mixin. As you can see, it is really not that big (especially since Ana’s original code is very light).
@mixin val($properties, $value) {
@each $prop in $properties {
#{$prop}: #{$value};
}
}
@function computeHeight($x, $skewAngle) {
@return sin(90deg - $skewAngle) * $x;
}
@function strip-units($number) {
@return $number / ($number * 0 + 1);
}
@mixin star($size) {
$height: computeHeight(strip-units($size), 30deg);
width: $size;
height: #{$height}#{unit($size)};
position: relative;
@include transition(all 0.3s);
@include transform(rotate(-30deg) skewX(30deg));
&:before,
&:after {
$properties: width, height, background;
content: '';
position: absolute;
@include val($properties, 'inherit');
}
&:before {
@include transform(skewX(-30deg) skewX(-30deg));
}
&:after {
@include transform(skewX(-30deg) rotate(-60deg) skewX(-30deg));
}
}
Well people, that’s pretty much it. You have a perfectly working Sass mixin to create customized single-element 6-point stars in CSS. Pretty neat, right?
Using it couldn’t be simpler:
.star {
margin: 5em auto;
background: tomato;
@include star(10em);
&:hover {
background: deepskyblue;
}
}
Thanks (and congratulations) to Ana Tudor for creating such a shape which made me do some cool Sass stuff.
I think I’ve come pretty close to this point, thus I thought it might be a good idea to write a bit about it and give you an inside glance at the whole thing.
Please consider this post as both a way to introduce some people to Sass and a way to ask Sass experts for advice about the way I handled things. Any comments appreciated. :)
One of the biggest problems one faces when building a stylesheet is size. Depending on the number of pages, elements and templates on your site, you might end up with a huge stylesheet, heavy as hell and not very maintainable.
I think one of the best things about using a CSS preprocessor (whatever your cup of tea) is that you can split your stylesheets into several parts without risking degrading performance.
This is exactly what I did, splitting my stylesheets into parts. As of writing, I currently have 5 different pieces (5 different .scss files):

- _font-awesome.scss: Font Awesome is the icon font I use on the site
- _google-fonts.scss: this is the snippet from Google Web Fonts
- _prism.scss: Prism.js is the syntax highlighter
- _helpers.scss: this file contains my mixins, variables and helper classes
- _styles.scss: the core of the CSS

Note: .scss files starting with an underscore (_) are not compiled into .css files.
Since my website isn’t that big, I didn’t have to split the code stylesheet into smaller parts like typography, header, footer, modules, etc.
So basically, my central stylesheet (styles.min.scss, compiled into styles.min.css) looks like this:
@import 'compass/css3/images';
@import 'compass/css3';
@import 'font-awesome', 'google-fonts', 'prism', 'helpers', 'styles';
The first two lines are Compass-related imports; they don’t compile into the final CSS. They enable the use of Compass’ embedded mixins, sparing me from writing vendor prefixes. The last line imports the 5 files into a single one (top to bottom).
Note: when importing Sass/SCSS files, you don’t need to add underscores or file extensions.
At first I was using the 1140px grid, but then it occurred to me that I didn’t need a framework, as simple as it is, to handle a 2-column layout. I could do it myself, and so I did.
My point is: I decided to keep my stylesheet as simple (and light) as possible. Thus, I did a huge cleanup of the Font Awesome stylesheet. I only kept what was needed: the @font-face call, about ten lines improving icon positioning, and the 8 icons I use across the whole site (instead of about 300).
Depending on your project size, you may have various files for that. Maybe one file for variables, one file for mixins, one file for helper classes, and whatever else you like.
My project is fairly (not to say really) small so I gathered everything into a single file. Let’s dig a little bit into it, part by part.
// Mixin providing a PX fallback for REM font-sizes
@mixin font-size($val) {
font-size: ($val * 20) + px;
font-size: $val + rem;
}
// Mixin handling breakpoints for media queries
@mixin breakpoint($point) {
@if $point == mama-bear {
@media (max-width: 48em) {
@content;
}
}
@if $point == baby-bear {
@media (max-width: 38em) {
@content;
}
}
}
Just two. Why have one hundred mixins when you only use two? The first one allows me to safely use rem for font sizes by providing a px fallback. This is a very nice mixin from Chris Coyier at CSS-Tricks.
The second one also comes from CSS-Tricks and is a nice way to handle breakpoints for media queries within a single declaration. If I ever want to change the breakpoints, I don’t have to go through all my stylesheets to find occurrences; all I have to do is edit the mixin.
Whenever I want to use a media query, I just have to write @include breakpoint(baby-bear) { /* My stuff here */ }.
Note: I use em in media queries in order to prevent some layout problems when zooming in the browser. More about it in this article from Lyza Gardner.
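For what it’s worth, the arithmetic inside the font-size mixin assumes a 20px root font size, which is indeed what the html rule sets later on. Modeled in JavaScript (fontSize is an invented helper, not part of the actual codebase):

```javascript
// Hypothetical model of the font-size mixin's arithmetic: a px fallback
// computed from the rem value, assuming a 20px root font size.
function fontSize(val, rootPx = 20) {
  return [`font-size: ${val * rootPx}px;`, `font-size: ${val}rem;`]
}

fontSize(0.8) // → ['font-size: 16px;', 'font-size: 0.8rem;']
```

Browsers that don’t understand rem read the px line; the others override it with the rem one.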
Ah, variables. The most awesome thing in any programming language in the world. This little thing that spares you from repeating the same things again and again.
Native CSS variables are coming but are currently only supported by Chrome, so meanwhile we rely on CSS preprocessors for variables. I have to say I really didn’t use many in my project. Actually I used 4, no more.
$pink: #ff3d7f;
$lightgrey: #444;
$mediumgrey: #666;
$darkgrey: #999;
At first I named my variables like $color1, $color2, etc. but then it occurred to me that I was not able to know which variable I had to set in order to have the right color, so I switched back to real color names. It feels easier to me this way.
Helpers are classes you can add to any element for a quick effect, without having to give this element an id or a specific class and then write styles for it.
I have quite a few helper classes, some very useful, others a bit less, but I use them all in the site. This kind of collection grows as the project grows, so for now it’s kind of small.
Let’s start with the basics:
- %clearfix is an invisible class meant to be extended (@extend) to clear floats in an element containing only floated elements.
- .icon-left and .icon-right are used on inline icons to prevent them from sticking to the text.

%clearfix {
&:after {
display: table;
content: '';
clear: both;
}
}
.icon-left {
margin-right: 5px;
}
.icon-right {
margin-left: 5px;
}
Then, two helpers to give content specific meaning:
- .visually-hidden simply makes the text disappear while keeping it accessible to both screen readers and search engine bots.
- .note is used to mark a paragraph as a note which could be removed without affecting the meaning of the content.

.visually-hidden {
position: absolute;
overflow: hidden;
clip: rect(0 0 0 0);
height: 1px;
width: 1px;
margin: -1px;
padding: 0;
border: none;
}
.note {
font-style: italic;
padding-left: 1em;
}
And now let’s dig into more interesting stuff. I have built some useful classes to pull images or quotes out of the flow and put them on the side in order to emphasize them. Both are built in the same way:
- %pull-quote and %pull-image are invisible classes; it means they won’t be compiled into the stylesheet, they are only here to be extended.
- .pull-quote--left, .pull-quote--right, .pull-image--left and .pull-image--right respectively inherit (@extend) styles from %pull-quote and %pull-image.

%pull-image {
max-width: 15em;
display: block;
@include breakpoint(baby-bear) {
float: none;
width: 100%;
margin: 1em auto;
}
}
.pull-image--left {
@extend %pull-image;
float: left;
margin: 0 1em 1em 0;
}
.pull-image--right {
@extend %pull-image;
float: right;
margin: 0 0 1em 1em;
}
%pull-quote {
max-width: 250px;
width: 100%;
position: relative;
line-height: 1.35;
font-size: 1.5em;
&:after,
&:before {
font-weight: bold;
}
&:before {
content: '\201c';
}
&:after {
content: '\201d';
}
@include breakpoint(baby-bear) {
float: none;
margin: 1em auto;
border: 5px solid $pink;
border-left: none;
border-right: none;
text-align: center;
padding: 1em 0.5em;
max-width: 100%;
}
}
.pull-quote--left {
@extend %pull-quote;
text-align: right;
float: left;
padding-right: 1em;
margin: 0 1em 0 0;
border-right: 6px solid $pink;
}
.pull-quote--right {
@extend %pull-quote;
text-align: left;
float: right;
padding-left: 1em;
margin: 0 0 0 1em;
border-left: 6px solid $pink;
}
Please note how I nest media queries inside their related selectors. There are two main reasons for this:
Note: if you ever wonder about the double dashes or underscores in class names, it is related to the BEM (Block Element Modifier) approach. More on the topic in this excellent post from Harry Roberts.
Now that we’ve seen pretty much everything other than what makes the site what it is, I think it’s time to dig into the main stylesheet. For readability, I’ll split it into several code snippets here. Plus, it will be easier to comment on them.
This is not optional: every project needs some way to reset CSS styles. Depending on your tastes it might be Eric Meyer’s CSS reset, Normalize.css or, as I like to call it, the barbarian CSS reset below.
*,
*:before,
*:after {
@include box-sizing(border-box);
padding: 0;
margin: 0;
}
Yes I know, this is dirty. I shouldn’t reset CSS this way but honestly, on small projects like this, it’s really not a big deal. At first I used Normalize.css but then I realized that loading kilobytes of code when 2 lines are enough is not necessary. So barbarian CSS reset it is, people!
Please note I use the simplest box-sizing since IE (all versions) represents less than 1.5% of my traffic.
I didn’t really know what to call this section.
html {
font: 20px/1 'HelveticaNeue-Light', 'Helvetica Neue Light', 'Helvetica Neue', 'Helvetica',
'Arial', 'Lucida Grande', sans-serif;
color: #555;
text-shadow: 0 1px rgba(255, 255, 255, 0.6);
border-left: 6px solid $pink;
background-image: url('data:image/png;base64,hErEiSaFuCkInGlOnGdAtAuRiaSaBaCkGrOuNd');
@include breakpoint(baby-bear) {
border-left: none;
border-top: 5px solid $pink;
}
}
a {
color: $pink;
text-decoration: none;
&:hover {
text-decoration: underline;
}
}
Basic stuff here. Font size, color, font families, text-shadow and everything that needs to cascade down the whole document are set on the root element (html). I also give some custom styles to anchor tags.
This used to be in the 1140px stylesheet but since I don’t use it anymore, I moved it back here. It’s all about main wrappers and containers.
.row {
width: 100%;
max-width: 57em;
margin: 0 auto;
padding: 0 1em;
}
.main {
@extend %content;
width: 68%;
margin-right: 2%;
@include breakpoint(mama-bear) {
margin-right: 0;
border-bottom: 3px solid #d1d1d1;
}
}
.sidebar {
@extend %content;
width: 30%;
padding-top: 2em;
}
%content {
padding-bottom: 3em;
float: left;
@include breakpoint(mama-bear) {
float: none;
width: 100%;
}
}
.row is the main wrapper: it contains the header, the main column (.main), the sidebar (.sidebar) and the footer.
%content is an invisible class shared between both the main column and the sidebar.
I deliberately skipped the rest of the stylesheet since it’s not the most interesting part in my opinion. It mostly consists of styles for various content elements like paragraphs, lists, tables, images, titles, and so on. Plus, it’s classic CSS, not really SCSS magic.
I think I have covered most of my Sass structure. If you feel like something could be improved or if you have any question, please be sure to drop a comment. :)
The following is a guest post by Ana Tudor. She is passionate about experimenting and learning new things. Also she loves maths and enjoys playing with code.
I had no idea how powerful CSS gradients could be until late 2011, when I found the CSS3 Patterns Gallery made by Lea Verou. The idea that you can obtain many shapes using just gradients was a starting point for many CSS experiments I would later do.
Recently, while browsing through the demos on CodePen, I came across a CSS3 Color Wheel and thought hey, I could do it with just one element and gradients. So I did and the result can be seen here. And now I’m going to explain the reasoning behind it.
The wheel - or you can think of it as a pie - is first split horizontally into two halves and then each half is split into five slices, so there are ten slices in total. Which means that the central angle for each slice is 360°/10 = 36°.
The pen below shows graphically how to layer the multiple backgrounds. It also has a pause button so that the infinite animation doesn’t turn into a performance problem.
Check out this Pen!
For both the original pen and this helper demo, the interesting part is this one:
background: linear-gradient(36deg, #272b66 42.34%, transparent 42.34%),
linear-gradient(72deg, #2d559f 75.48%, transparent 75.48%),
linear-gradient(-36deg, #9ac147 42.34%, transparent 42.34%) 100% 0, linear-gradient(
-72deg,
#639b47 75.48%,
transparent 75.48%
) 100% 0,
linear-gradient(36deg, transparent 57.66%, #e1e23b 57.66%) 100% 100%, linear-gradient(
72deg,
transparent 24.52%,
#f7941e 24.52%
) 100% 100%,
linear-gradient(-36deg, transparent 57.66%, #662a6c 57.66%) 0 100%, linear-gradient(
-72deg,
transparent 24.52%,
#9a1d34 24.52%
) 0 100%, #43a1cd linear-gradient(#ba3e2e, #ba3e2e) 50% 100%;
background-repeat: no-repeat;
background-size: 50% 50%;
We first specify the nine gradient backgrounds, their positioning and the background-color using the shorthand background syntax.
For anyone who doesn’t remember, the background layers are listed from the top one to the bottom one and the background-color is specified together with the bottom layer. A background layer includes the following:

<background-image> <background-position> / <background-size> <background-repeat> <background-attachment> <background-origin> <background-clip>
If the background-position is not specified, then the background-size isn’t specified either. Also, since background-origin and background-clip both need the same kind of value (that is, a box value like border-box or content-box), then, if there is only one such value, that value is given to both background-origin and background-clip. Other than that, any value except the one for background-image can be missing and then it is assumed to be the default.
Since we have nine background layers and we want to have the same non-default values for background-repeat and background-size for all of them, we specify these outside the shorthand so that we don’t have to write the same thing nine times.
In the case of background-size, there is also another reason to do that: Safari doesn’t support background-size inside the shorthand and, until recently (up to and including version 17), Firefox didn’t support that either. Also, two values should always be given when the background-image is a gradient, because giving it just one value is going to produce different results in different browsers (unless that one value is 100%, in which case it might as well be missing as that is the default).
The background-color is set to be a light blue (#43a1cd) and then, on top of it, there are layered nine non-repeating (background-repeat: no-repeat for all) background images created using CSS gradients. All nine of them are half the width and the height of the element (background-size: 50% 50%).
The bottom one - horizontally centred (50%) and at the bottom (100%) - is really simple. It’s just a gradient from a firebrick red to the same color (linear-gradient(#ba3e2e, #ba3e2e)), so the result is simply a solid color square.
The other eight are gradients from transparent to a solid color or from a solid color to transparent. Four of them look like double slices, having a central angle of 2*36° = 72°, but half of each such double slice gets covered by another single slice (having a central angle of 36°).
In order to better understand gradient angles and how the % values for color stops are computed, let’s see how a linear gradient is defined. Hopefully, this demo that lets you change the gradient angle helps with that - just click the dots.
Check out this Pen!
The gradient angle is the angle - measured clockwise - between the vertical axis and the gradient line (the blue line in the demo). This is for the new syntax, which is not yet supported by WebKit browsers (however, this is going to change). The old syntax measured angles just like on the trigonometric unit circle (counter-clockwise and starting from the horizontal axis).
Note: coming from a mathematical background, I have to say the old way feels more natural to me. However, the new way feels consistent with other CSS features, like rotate transforms, for which the angle values are also clockwise.
What this means is that we (almost always) have different angle values in the standard syntax and in the current WebKit syntax. So, if we are not using something like -prefix-free (which I do almost all the time), then we should be able to compute one when knowing the other. That is actually pretty simple. They are going in opposite directions, so the formula for one includes the other with a minus sign. Also, there is a 90° difference between them, so this is how we get them:
newSyntax = 90° - oldSyntax;
oldSyntax = 90° - newSyntax;
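Both formulas can be turned into runnable helpers, with the result normalized into the 0–360 range (toNewSyntax and toOldSyntax are invented names for this sketch):

```javascript
// Convert between the old (counter-clockwise from the horizontal axis) and
// new (clockwise from the vertical axis) gradient angle conventions.
function toNewSyntax(oldDeg) {
  return (((90 - oldDeg) % 360) + 360) % 360
}
function toOldSyntax(newDeg) {
  return (((90 - newDeg) % 360) + 360) % 360
}

toNewSyntax(0) // → 90: old horizontal left-to-right is new "to right"
```

The conversion is its own inverse, which matches the symmetry of the two formulas above.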
Note: if no gradient angle or destination side is specified (for example, linear-gradient(lime, yellow)), then the resulting gradient is going to have a gradient angle of 180°, not 0°.
All the points on a line that is perpendicular to the gradient line have the same color. The perpendicular from the corner in the quadrant that’s opposite to the quadrant of the angle is the 0% line (the crimson line in the demo) and its intersection with the gradient line is the starting point of the gradient (let’s call it S). The perpendicular from the opposite corner (the one in the same quadrant as the gradient angle) is the 100% line (the black line in the demo) and its intersection with the gradient line is the ending point of the gradient (let’s call it E).
In order to compute the % value of any point P, we first draw a perpendicular on the gradient line starting from that point. The intersection between the gradient line and this perpendicular is going to be a point we’ll name I. We now compute the ratio between the lengths of SI and SE and the % value for that point is going to be 100% times that ratio.
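That procedure translates directly into code. Here is a hypothetical sketch (colorStopPercent is an invented name) based on projecting the point onto the gradient line, with coordinates relative to the center of the gradient box and y pointing down as on screen:

```javascript
// % value of a point inside a gradient box, for a gradient angle given
// in the new syntax (degrees, clockwise from vertical).
function colorStopPercent(x, y, width, height, angleDeg) {
  const rad = (angleDeg * Math.PI) / 180
  // Unit vector along the gradient line (screen coordinates, y down).
  const dx = Math.sin(rad)
  const dy = -Math.cos(rad)
  // Length of the gradient line: distance between the 0% and 100% lines.
  const length = Math.abs(width * dx) + Math.abs(height * dy)
  // Signed distance of the point from the center, along the gradient line.
  const t = x * dx + y * dy
  return ((t + length / 2) / length) * 100
}

// A 180deg gradient runs top to bottom: the top edge sits at ≈ 0%.
colorStopPercent(0, -50, 100, 100, 180)
```

For a 180deg gradient over a 100×100 box, the top edge comes out at 0% and the bottom edge at 100%, as expected.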
Now let’s see how we apply this for the particular case of the rainbow wheel.
Let’s first consider a gradient that creates a single slice (one with a central angle of 36°). This is a square image (see below), with a blue slice having an angle of 36° in the lower part. We draw the horizontal and vertical axes through the point O at which the diagonals intersect. We draw a perpendicular from that point to the line that separates the dark blue part from the transparent part. This is going to be the gradient line. As can be seen, there is a 36° angle between the vertical axis and the gradient line, so the angle of the gradient is 36°.
We now draw a perpendicular from the corner of the square in the quadrant that is opposite to the one in which the gradient angle is found. This is the 0% line. Then we draw a perpendicular from the corner of the square in the same quadrant (QI) as the gradient angle - this is the 100% line.
The intersection of the diagonals of a square splits each one of them into two, so AO and BO are equal. The BOE and AOS angles are equal, as they are vertical angles. Moreover, the BOE and AOS triangles are right triangles. All these three mean that the two triangles are also congruent. Which in turn means that SO and EO are equal, so the length of SE is going to be twice the length of EO or twice the length of SO.
Note: before moving further, let’s go through a couple of trigonometry concepts first. The longest side of a right-angled triangle is the one opposing that right angle and it’s called the hypotenuse. The other two sides (the ones forming the right angle) are called the catheti of the right triangle. The sine of an acute angle in a right triangle is the ratio between the cathetus opposing that angle and the hypotenuse. The cosine of the same angle is the ratio between the adjacent cathetus and the hypotenuse.
Computing the length of EO in the right triangle BOE is really simple. If we take the length of the side of the square to be a, then the length of the half diagonal BO is going to be a*sqrt(2)/2. The BOE angle is equal to the difference between the BOM angle, which is 45°, and the EOM angle, which is 36°. This makes BOE have 9°. Since BO is also the hypotenuse in the right triangle BOE, the length of EO is going to be (a*sqrt(2)/2)*cos9°. Which makes the length of SE be a*sqrt(2)*cos9°.
We now draw a perpendicular from A to the PI line. ASID is a rectangle, which means that the length of SI equals the length of AD. We now consider the right triangle APD. In this triangle, AP is the hypotenuse and has a length of a. This means that AD is going to have a length of a*sin36°. But SI is equal to AD, so it also has a length of a*sin36°.
Since we now know both SI and SE, we can compute their ratio. It is sin36°/(sqrt(2)*cos9°) = 0.4234. So the % value for the color stop is 42.34%.
In this way, we’ve arrived at: linear-gradient(36deg, #272b66 42.34%, transparent 42.34%)
Computing the % values for the other background layers is done in the exact same manner.
By now, you’re probably thinking it sucks to do so many computations. And it must be even worse when there are more gradients with different angles…
Even though I did compute everything on paper for the rainbow wheel experiment, I can only agree with that! This is why I made a really basic little tool that computes the %
for any point inside the gradient box. You just need to click inside it and the %
value appears in a box at the bottom center.
Check out this Pen!
You can change the dimensions of the gradient box and you can also change the gradient itself. It accepts the newest syntax for linear gradients, with angle values in degrees, to <side> values, or no value at all to describe the direction of the gradient.
CSS gradients are really powerful and understanding how they work can be really useful for creating all sorts of imageless textures or shapes that would be difficult to obtain otherwise.
]]>He started with:
And wanted to end with:
Even if I’m not a flexbox expert, I’m pretty confident saying there is a way to do it very easily with flexbox. The problem is that flexbox isn’t fully supported across browsers yet, so we had to look for another option.
Actually, Bennett Feely already did it very nicely on CodePen.
I first managed to do it with :nth-child()
selectors, manually repositioning each of the ten elements (JSFiddle). It sucked because it was:
I was quite upset at not finding any proper way to do it with CSS alone, so I did it with a mix of CSS and JavaScript (jQuery, in fact). I don’t know if it’s the best way to do it in JavaScript, but here is what I came up with:
$('.myList > li:odd').remove().appendTo('.myList')
Basically I target every other item (jQuery’s :odd selector is zero-indexed, so it matches the same elements as CSS :nth-child(even)), then remove it from the DOM and finally append it again. This does exactly what was asked, so I think it’s a decent solution (JSFiddle).
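The same reordering can be expressed as a framework-free pure function (the name `splitAlternate` is mine, for illustration): keep the even-indexed items in place and move the odd-indexed ones to the end, which is exactly what the jQuery one-liner does to the list elements:

```javascript
// Move every second item (odd zero-based index) to the end of the list,
// mirroring $('.myList > li:odd').remove().appendTo('.myList').
function splitAlternate(items) {
  const kept = items.filter((_, i) => i % 2 === 0);
  const moved = items.filter((_, i) => i % 2 === 1);
  return kept.concat(moved);
}

console.log(splitAlternate([1, 2, 3, 4, 5, 6]));
// → [1, 3, 5, 2, 4, 6]
```

In real DOM code, `.detach()` would be preferable to `.remove()` since it preserves event handlers bound to the moved elements.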
Finally someone came up with a better idea (and probably a better understanding of CSS) than mine with a pure CSS and very elegant solution (CodePen).
li:nth-child(even) {
margin: 110px 0 0 -110px;
/* Given a 100*100px element with a 10px margin */
}
Wolfcry911 simply used margins to reposition one out of two items. It’s a brilliant solution, really.
However it relies on advanced CSS pseudo-selectors, so for deeper browser support you might want to fall back on the JavaScript solution.
I just noticed Estelle Weyl did it in another clever way with CSS columns (CodePen). I’m actually wondering whether it isn’t the best option of all, since it requires one single CSS declaration (prefixes omitted).
ul {
columns: 5;
}
Congratulations to her for such a smart solution. :)
A few days ago, Chris Coyier found Wolfcry911’s work and tweeted about it. Someone (Arash Milani) replied that it wasn’t possible to do it with more than 2 rows.
CHALLENGE ACCEPTED! This made me want to give it a shot. Honestly, it took me a few tries and no more than 10 minutes to find a solution for 3 rows.
Check out this Pen!
Instead of doing :nth-child(even)
, we need two different selectors:
li:nth-child(3n + 2) {
margin: 120px 0 0 -110px;
background: limegreen;
}
li:nth-child(3n + 3) {
margin: 230px 0 0 -110px;
background: crimson;
}
So I found a solution to do it with the number of rows we want, pretty cool. Immediately, I thought about automating this. And guess what? I succeeded.
First, I had to move everything to em units in order to make the whole thing easier to customize. I also created a few variables:
$rows: 4;
$baseline: 10px;
$width: 4em;
$height: 4em;
$margin: 0.4em;
A few explanations about the variables:
- $rows stands for the number of rows you want,
- $baseline is set as a font-size on the root element (html) in order to be able to use em everywhere,
- $width is the width of each item; in my demo it equals 100px,
- $height is the height of each item; in my demo it equals 100px as well,
- $margin is the gap between each item; I set it to 10% of the size of an item.
Note: you may wonder why I use 2 different size variables when one would be enough. This allows you to use non-square items if you want to: try it, it works.
Now let’s get to the fun part. I figured out there is some kind of pattern to achieve this, and to be honest it took me a while (no pun intended) to write the while loop for it, struggling between my understanding of the problem and Sass syntax errors. Anyway, this is the main idea:
$i: $rows; // Initializing the loop
@while ($i > 1) {
li:nth-child(#{$rows}n + #{$i}) {
$j: ($i - 1); // Setting a $i-1 variable
margin-top: ($j * $height + $i * $margin);
margin-left: -($width + $margin);
}
$i: ($i - 1);
}
It looks pretty tough, so let me show you how it compiles when $rows is set to 4 (the other variables remaining unchanged):
li:nth-child(4n + 4) {
margin-top: 13.6em; // (3 * 4em) + (4 * 0.4em)
margin-left: -4.4em; // -(4em + 0.4em)
}
li:nth-child(4n + 3) {
margin-top: 9.2em; // (2 * 4em) + (3 * 0.4em)
margin-left: -4.4em; // -(4em + 0.4em)
}
li:nth-child(4n + 2) {
margin-top: 4.8em; // (1 * 4em) + (2 * 0.4em)
margin-left: -4.4em; // -(4em + 0.4em)
}
I think the pattern should be easier to see now thanks to the comments. For X rows you’ll have X-1 different selectors, starting from :nth-child(Xn+Y) (with Y equal to X) and decrementing Y as long as it stays strictly greater than 1 (so down to Y = 2).
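The Sass loop can be mirrored in plain JavaScript to check the generated values. This helper (`generateRules`, a name of my own) assumes the same variables as the Sass demo ($rows: 4, $width: 4em, $height: 4em, $margin: 0.4em):

```javascript
// Generate the nth-child margin rules produced by the Sass @while loop.
// rows = $rows; width, height and margin are in em, matching the demo values.
function generateRules(rows, width, height, margin) {
  const rules = [];
  for (let i = rows; i > 1; i--) {
    const j = i - 1; // same $j = $i - 1 as in the Sass version
    rules.push({
      selector: `li:nth-child(${rows}n + ${i})`,
      marginTop: +(j * height + i * margin).toFixed(2) + 'em',
      marginLeft: '-' + +(width + margin).toFixed(2) + 'em',
    });
  }
  return rules;
}

console.log(generateRules(4, 4, 4, 0.4));
// First rule: li:nth-child(4n + 4) → margin-top: 13.6em, margin-left: -4.4em
```

Running it reproduces the three rules from the compiled CSS above, which confirms the X-1 selector pattern.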
Check out this Pen!
Try changing the number of rows by editing $rows
and see the magic happen.
There are still some problems with this method: what if items have various sizes? What if we want different margins? What if we set a disproportionate number of rows given the number of items?
I guess we could complicate the whole thing to accept more parameters and be even more flexible, but would it be worth it? I guess not. The simple way is to use JavaScript. The fun way is to use Sass.
]]>Well, first of all, it is kind of complicated because I work at 3 different places, which means I have 3 different development environments (5 actually, I have 3 computers at home). There is -well- home, but I also happen to do some stuff at school or at work when I have some time, mostly during the lunch break.
Anyway, I will try to describe what I use to work.
Let’s start with the easy thing: the operating system. I use both Mac and Windows. At home, I mostly use my girlfriend’s laptop, which is a 4-year-old Mac. I also have 2 computers I use(d) for gaming, which run Windows 7 and Windows Vista.
At work, I am on Windows XP. Yeah, that’s not cool, I know. But the whole infrastructure is based on Windows XP, so even developers’ workstations are using XP. Anyway, I can live with it.
At school we’re on Windows 7. The computers there are pretty cool I must say.
I haven’t tried Linux yet but I think I might come to it sooner or later. I like a challenge.
Ah, browsers. Our main tools. For the record, not so long ago I swore by Firefox. But when I started doing a lot of things on the web at the same time (running many tabs with somewhat heavy content like videos, WebGL, CSS animations, etc.), it occurred to me that Firefox was suffering from bad memory management. That wasn’t the case with Chrome.
So I switched to Chrome and never looked back. I even pushed it one step further, using Chrome Canary, in order to access a few things Chrome doesn’t support (or didn’t support at the time I switched to Canary) like CSS shaders, exclusions, regions and so on.
At work, because of what looks like an SSL issue, I am also running Firefox Aurora, which is the future version of Firefox, like Canary for Chrome. I don’t dislike Firefox (it’s a wonderful browser) but I clearly prefer Chrome.
I also have Opera and Safari on some computers for occasional tests. Since I am not a freelance web designer living off the sites I make, I’m not using any browser testing tool like BrowserStack. I would really love a BrowserStack license, but I can’t (or don’t want to) afford a $20/month subscription.
I used to be a huge fan of Notepad++, even when everybody was using Dreamweaver. Honestly I never liked DW; it is super heavy while doing little more than a regular text editor.
Now I have settled on Sublime Text 2 on all my computers, with no intention to change soon. The thing Sublime Text 2 provides that Notepad++ doesn’t is the ability to open a whole folder, giving you access to any file of your project in a tree view. This is really cool. Plus Sublime Text 2 looks better in my opinion. :)
That being said, I’m keeping a careful eye on Brackets from Adobe, a web-based IDE which looks pretty cool.
Call me old-fashioned, but I still use an FTP client. Yes, I know it’s not the 2000s anymore but I don’t know how to use FTP from the command line, so I am stuck with FileZilla. It is actually pretty cool and very easy to use.
However I would like to move forward, so I am currently learning how to do some FTP work through the command line. I’m still not very good at it, so for now I keep using my beloved FileZilla.
Well, I am a huge fan of this designing-in-the-browser thing, plus I am pretty bad with any design tool. I mean you, Photoshop. So really, I hardly use Photoshop unless I am forced to.
However I am lucky enough to have an Adobe Creative Suite in most of my development environments. Work provides official licences, we have student licences at school and I have a student licence at home as well.
You may find this silly but 9 times out of 10, I use Photoshop to resize and save a screenshot I just took. Yeah… a $3000 piece of software to make screenshots is a bit expensive I guess.
I didn’t know what to call this section because it gathers various tools doing various things that I use on various occasions. I hope it’s clear enough. :P
Not so long ago I gave a try to CSS preprocessors, because I am both curious and a CSS lover. It turned out I like CSS preprocessors, they give a lot more options than regular CSS.
Anyway, I am using Sass and Compass on most of my projects now. As an example, this site is built on Sass.
I run Sass through the command line. Yes, it’s scary. But actually it is really not that hard. I would like to have some sort of application taking care of everything for me, like CodeKit; unfortunately I am not always on Mac OS, plus CodeKit is not free ($25). If I always used the same development environment, I would definitely buy CodeKit, but sadly I don’t.
I know there are CodeKit equivalents for Windows. Most people will tell you about Scout. I tried it yesterday (as I told you, I am curious). Guess what: it turns out Scout was messing with my stylesheets, introducing errors into them. My opinion? It sucks. Back to the command line.
Yaaaaay! Git, my dear friend! Friends, I suck at Git. I understand the main idea, I even know how to do some very basic stuff, but every single time I need to do something it takes me about 20 minutes: I try every command I know (which is about 6 or 7), fail, get upset, read the docs, understand nothing there either, and finally ask my brother. Long story short, I don’t like Git… yet.
But I still have a GitHub account, which only has 2 repositories as of today (good ones tho!). I hope I’ll push other things there in the not so distant future.
When I have to do some server side stuff, mostly PHP (sometimes MySQL), I use EasyPHP when I’m on a Windows machine or Mamp when I’m on Mac.
Well I guess I have covered pretty much everything I thought about. If I missed anything, just tell me and I will edit the post.
What about you people? What’s your development environment?
]]>First of all, this will be a LESS puzzle, so if you’re really unfamiliar with this CSS preprocessor, I think you might feel a bit lost here. Sorry! :(
So the main idea is to enable a Google Web Font using a variable to have only one occurrence of the font name without leaving the stylesheet. Let me explain the requirements a little better:
- go to Google Web Fonts, pick a font, select the @import option, and copy the given URL to your clipboard,
- declare a variable holding the font name, like @my-font: "NameOfMyFont";,
- write an @import url() using the variable as the font name in the URL,
- apply this font to some element (a <h1> would be good).
Those accustomed to Sass, like me, will wonder where the difficulty lies in this little exercise. The problem is that LESS is extremely annoying when it comes to both url() and string concatenation. I partially covered the topic in this article.
/* Sass version */
$my-font: 'Merriweather';
$url: 'https://fonts.googleapis.com/css?family=#{$my-font}';
@import url($url);
h1 {
font-family: $my-font;
}
I struggled for about an hour on this and couldn’t make it work. All my respect to whoever finds the solution.
Good luck!
Loïc Giraudel (who happens to be my dear brother) pointed out a thread on GitHub mentioning that what I called a “puzzle” is in fact a real bug, reported more than a year ago.
However, as of today there is no fix for this, nor is there a workaround. So unless anyone comes up with a solution, this is currently not possible, unfortunately.
Plus, the people behind LESS imply that fixing this bug would require a large amount of work and deep code restructuring.
No luck.
]]>As you can see, the layout has been updated! It’s now a 2-column website. There are a number of reasons which made me change it, but I think the most important one was that I was sick of seeing all this stuff about me on the home page.
Let’s be realistic: the main content is the blog, not the 20 lines about me you could see every time you loaded the first page. I wanted to put the articles forward, so now the main page lists the available articles. It seems muuuch better to me this way, what do you think?
However, I wanted to give visitors a quick glance at who I am, so I thought it could be a good idea to have a little sidebar displaying information about me. Now, I’m thinking of adding a picture of me in the sidebar; I know a lot of people do that on their blog. Any thoughts about that?
Another thing that occurred to me is that the lines were too long. It may be silly, but when lines are too long, reading becomes more difficult. Now the main column is narrower, reading an article is easier and de facto nicer.
I felt like the old layout lacked responsiveness. It wasn’t bad since it already provided a mobile-friendly version, but I wanted a little bit more. This is why I landed on the 1140px CSS grid by Andy Taylor.
I’m particularly happy with this grid system. It is very easy to set up and as you can see it’s pretty darn efficient!
I didn’t change many things design-wise, except the left border on the whole page to wedge everything from the left. I guess both the header and the footer are better delimited thanks to the solid borders; it’s probably better this way. Also, what do you think of the new Codrops tag on the home page? Pretty nice, right?
However I slightly improved the mobile version, especially the nav bar. It was a little bit messy in the previous version; it should now be properly centered. I’m thinking about centering the footer on mobile as well. Don’t know yet.
I now rely on a PHP structure for convenience. Actually, I was kind of sick of having to edit a dozen files every single time I wanted to make a tiny little change in the header or the footer. So I now have only PHP files, letting me use include()
.
But switching all my files to .php means a terrible thing: old URLs won’t work anymore! What about all those tweets, links and poor souls unable to reach my blog posts? No worries. My brother helped me write some .htaccess rules to allow reaching the blog posts through the old URLs. Big thanks to him. :)
While we’re talking about .htaccess: you can now access articles without the file extension like this: https://kittygiraudel/blog. Pretty cool, right?
I also decided to rely on a CDN rather than self-hosting for Font Awesome (in v3.0.1 as of a couple of days ago). I was especially concerned about the file size of my stylesheet because Font Awesome, like any other icon font, uses a lot of CSS. Anyway, I’m now using Tim Pietrusky’s CDN WeLoveIconFonts and I’m pretty happy with it. ;)
I tried to add a few features in order to make your experience nicer. Nothing big, just a few things which are — according to me — UX improvements. Among those:
I’m kind of psychotic when it comes to performance. I always try to make the page as fast as I can. I’m really pissed off when I’m waiting for a page to load more than 2 seconds, so I tried to do my best to make the loading time as quick as possible.
Among the many things I did on the topic, I:
I don’t know if it’s a sudden realisation or the recent A11y project which motivated me to do that but I took some time to improve accessibility on the site. Plus, it gave me the opportunity to learn some things on the topic.
First of all, I switched a bunch of my divs to “new” HTML5 elements. So I’m now using <header>
, <article>
, <aside>
, <footer>
, <section>
, and so on. I must say it feels right, really.
Secondly, I dug a little into ARIA roles. I have to say I didn’t know it was such a deep and complex topic, so I may have understood a few things wrong. Anyway, I added a role="" attribute to many elements in the site, especially on the home page.
I also gave keyboard navigation a few tries and I have to say it’s really not that bad. If you have a few minutes, try it on the home page and tell me what you think about it.
By the way, if some accessibility ninja is passing by and finds something wrong, please be sure to tell me. :)
SEO, big thing! I decided to push it one step further by trying microdata. Man, this is not an easy thing. If you’re not familiar with microdata, the main idea is to label content to describe a specific type of information (person, event, review, etc.). This aims at helping search engine bots understand the content they index.
Now if you inspect the sidebar code, you might see some microdata about me including name, job title, nationality, URLs, and so on. I believe it will help search engines index data about me. We’ll see if it works.
I also edited the jQuery plugin I use for pagination on the home page because it was using .hide()
to hide the content of every page but the current one. As you may know, search engines don’t index content set to display: none;
.
So I gathered my courage, opened the file and replaced those hide and show methods with class toggling. The class hides things with CSS, letting search engines index the content. It may sound silly, but for a JS douche like me, editing a plugin is a pretty big deal. :D
You tell me. If you have any request, comment, advice or any other feedback, be sure to speak up. Thanks a lot.
]]>If you’re a web designer or developer, you’ve probably already stumbled upon some wonderful online tools / services. Not necessarily complicated things, just things you definitely need. There are really a bunch of them, and Wild Web Watch is pretty much focused on listing them, but I’d like to focus on just a few of them. The ones I use very often.
So here are the tools I’ll cover in this article:
CSS Coloratum is a handy tool helping you convert colors between different syntaxes. It currently supports keywords, hexadecimal, RGB and HSL. Plus, it shows a preview.
Probably one of the best tools I know, especially when you’re working with hexadecimal colors you want to convert to colors accepting an alpha value (RGBa / HSLa).
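The kind of conversion the tool performs can be sketched in a few lines of JavaScript. This is my own illustration, not CSS Coloratum’s actual code (`hexToRgba` is a hypothetical name):

```javascript
// Convert a 6-digit hex color plus an alpha value into an rgba() string,
// the typical hex-to-RGBa conversion described above.
function hexToRgba(hex, alpha = 1) {
  const value = hex.replace('#', '');
  const r = parseInt(value.slice(0, 2), 16); // red channel
  const g = parseInt(value.slice(2, 4), 16); // green channel
  const b = parseInt(value.slice(4, 6), 16); // blue channel
  return `rgba(${r}, ${g}, ${b}, ${alpha})`;
}

console.log(hexToRgba('#ff8800', 0.5)); // → rgba(255, 136, 0, 0.5)
```

Each pair of hex digits is just a base-16 number, so the whole conversion boils down to three `parseInt` calls.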
WeLoveIconFonts (yes we do!) is some kind of CDN (Content Delivery Network) for icon fonts, like Google Web Fonts for web fonts. It currently supports Brandico, Entypo, Font Awesome, Fontelico, Maki, OpenWeb Icons, Typicons and Zocial.
It’s very easy to use. You pick one or more fonts, you copy the @import line into your stylesheet and you’re done. You can put icons all over your website. No more struggle with font files.
PageSpeed Insights is a tool made by Google which analyzes the content of a web page, then generates suggestions to make things faster. What I really like about PSI is it also exists as a Chrome and a Firefox extension, which means you can inspect your page directly from the WebDeveloper Tools / Firebug. Isn’t that awesome?
ColorZilla provides 2 really awesome things: a CSS gradient generator and a Chrome / Firefox extension to deal with colors. I really recommend both, so I’ll talk about each.
Colorzilla Gradient Generator is, well, a CSS gradient generator and probably the best you’ll find so far. It provides a bunch of options like gradient orientation, reversing, size, IE support with filters, color adjustments and much more. And of course, you can copy and paste the CSS code for all browsers. Plus, it also provides 137 presets gradients.
Colorzilla is also a Chrome / Firefox extension to manage colors. This extension provides a lot of features, including:
I know there are a bunch of colorpicker / eyedropper extensions out there, but you won’t find any like this one. ColorZilla is really, really awesome and I wonder how I worked so long without it.
CanIUse.com is the perfect tool when building HTML5 and CSS3 websites or applications. It groups together compatibility tables for most HTML5, CSS3, SVG and JS API features. From there, you have access to browser support statistics coming from StatCounter for both desktop and mobile browsers, plus various notes you may want to know about before using a feature.
This awesome tool has quickly become the reference when it comes to browser support documentation. I use it almost every day and I would probably be lost without it. As a front-end developer, it’s a really, really useful tool.
It also exists as a Chrome extension, meaning you can search for features directly in your browser without having to visit caniuse.com, but I don’t use it much since the search engine isn’t that good (“border-image” doesn’t give any result while “border image” does, for example).
There are plenty more tools I’d like to talk about but I think it will be for another article. Enough for one day! What about you people, what are the tools you always use? Be sure to share your opinion!
]]>Why Tetris? Well first, it’s a pretty classic game which doesn’t need a good design layer to be fun to play. Plus, the logic behind the game is simple enough to start, but there are still some difficulties which are very interesting to learn from.
As a reminder:
If you don’t give a fuck about how I built this up and simply want to give it a try, please refer to the last section at the bottom of the article. Have fun! ;)
Disclaimer! Please understand this game is one of my first works in Unity with C#, so it’s pretty dirty. There are still some bugs, and there has been absolutely no work on the design layer. The point was to make a game which is playable. It’s more than enough for a first attempt, don’t you think?
Before doing any code, I had the “how the fuck am I supposed to do this?” moment. I had to think about the process behind the program, the way it would work. This is what I started with:
Even if I’m clearly not a good C# developer, nor am I a Unity ninja, I was pretty confident about making this game. Given the process above, I thought it would be fairly easy to do. Boy, I was wrong. It’s been rough. Let’s see why!
There are a few things that were really not easy to do with my current skills and knowledge of both the language and the program but one was really above all: collision detection. What a bitch.
Basically, I wanted to:
At first glance, it seems easy, especially when you know the Unity engine implements a collision detection system. To put it simply, Unity’s collision detection permanently checks whether something is touching your current item. If so, it returns the touching item; otherwise it returns false.
First problem: how to distinguish bottom collisions from side collisions? They aren’t the same thing: if you detect a collision at the bottom, you have to make the brick stop moving, while if you detect a collision on a side, you have to prevent the user from moving the brick in that direction.
I’ve spent hours trying to make it work with no success, so I ended up using a completely different approach: rays. To put it simply, you can cast an invisible ray from the center of an item in any direction you want and over the distance you want. It returns a boolean: there is something, or there is not.
So what I did was cast rays in 3 directions (left, bottom, right) over a very short distance. If the ray returns something on a side, I prevent the brick from moving to that side. If there is something below, I stop the brick and instantiate a new one. It seems to work well.
Another problem of mine was placing the bricks correctly. During the first phase of testing, bricks were slightly overlapping each other (see figure). Not much, but enough to be seen and to cause some line-destroying issues.
This was caused by the collision detection problem. Whatever the method I used (rays, OnCollisionEnter, OnTriggerEnter, …), the brick wasn’t stopped at exactly the right position. It was always “more or less” where it should have ended. This lack of accuracy was problematic.
I ended up doing something I didn’t want to do, but I had no choice: brick repositioning after landing. Basically, when a brick stops moving, I round its coordinates to place it where it should have ended. It’s not great and it involves some calculations, but I couldn’t think of any other option to fix this issue.
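The repositioning boils down to snapping coordinates to the grid. Here is a language-agnostic sketch of the idea in JavaScript (the actual game is in C#; `snapToGrid` and the 1-unit cell size are my assumptions, not the game’s code):

```javascript
// Round a coordinate to the nearest multiple of the grid cell size,
// so a brick that lands at x = 3.07 is repositioned to x = 3.
function snapToGrid(value, cellSize = 1) {
  return Math.round(value / cellSize) * cellSize;
}

console.log(snapToGrid(3.07)); // → 3
console.log(snapToGrid(4.96)); // → 5
```

With every landed brick snapped this way, the tiny overlaps left by the imprecise collision detection disappear and the line-completion check becomes reliable.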
When I started coding, I was expecting the brick rotation to be very simple. In some way, I wasn’t wrong: making the brick rotate on itself is the easy part. When pressing the up or down arrow, the brick makes a 90° rotation clockwise or counter-clockwise, no problem.
What was much harder, however, was preventing the rotation when too close to the wall. Rotating a red bar near the wall could mean wedging the bar into the wall. I succeeded in preventing the rotation when too close to the wall; however, there is still a bug with the purple L brick, which cannot rotate when placed at a 1-unit gap from the wall. Sadly, I don’t know how to fix it.
The other big problem I had and still have with brick rotation is rotating near another brick. Unlike the walls, there is no restriction on rotating a brick near another one, meaning you can overlap bricks this way. In most cases, this bug won’t be noticed, because the common behaviour when playing Tetris is to rotate and move the brick while it’s falling, not at the very last moment of its fall. Still, it’s a bug I couldn’t fix.
Once I figured out how to fix most of the problems above, it was time to make some improvements to make the game enjoyable: increasing difficulty over time, displaying the next brick, showing the score, having a main screen, allowing pausing, and so on. That’s kind of all the features, actually. Making the game work is the hard part, but making the game cool is very important as well.
Showing the score was very easy to do. Basically, every time a brick is spawned, a score variable is incremented by 10 points. When a line is destroyed, the score variable gains 100 points. What was a little harder was displaying the score on the main screen once you lose the game. (This makes me notice I forgot to reset the number of lines when playing again. :x)
On Unity, when you want to do various “levels”, you have to create multiple scenes. Like you would do in Flash if you know what I mean. So in our case, the main menu is a scene, and the game is another one. Problem is, objects in a scene are not accessible from another scene by default, so I had to do some trickery.
To increase the difficulty over time, I had multiple options:
speed = score / 100
or I don’t know. Same reason as above.
So I decided to increase the speed every time you complete a line. Not by much, so you don’t notice it, but progressively you start to feel it. This option seems great to me because it increases the speed only when you’re winning, without requiring any extra scene that would break the game flow.
From there, displaying some kind of level was only a math concern. The result is that you pass a level every 10 lines. I think it’s pretty faithful to classic Tetris.
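That math can be sketched as follows (the function names and the per-line speed increment are my guesses at the idea described, not the game’s actual C# code):

```javascript
// Level: one level every 10 completed lines, starting at level 1.
function levelFor(lines) {
  return Math.floor(lines / 10) + 1;
}

// Fall speed: a small bump on every completed line, barely noticeable
// at first but adding up progressively as you win.
function speedFor(lines, baseSpeed = 1, increment = 0.05) {
  return baseSpeed + lines * increment;
}

console.log(levelFor(0));  // → 1
console.log(levelFor(25)); // → 3
console.log(speedFor(10)); // base speed plus ten small bumps
```

The key property is that both values depend only on completed lines, so difficulty rises exactly when the player is doing well.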
An interesting aspect of making a game in Unity is that you’re working in 3D. You can choose to ignore it, but I feel like it can be a plus for the game when used correctly (which is not the case in this game :D). I wanted to try slowly moving the camera during the game to increase difficulty. Fortunately, I didn’t have to struggle with quaternion calculations for the rotation, since I managed to do everything directly in the IDE with default animations.
It was pretty easy to do. However, I felt it could be really annoying for some people (including me) to have the camera moving permanently, so I simply added an option to enable/disable it: the C key in game, as in camera. True story.
Disclaimer (once again)! This game is kind of a learning experiment so it may be pretty dirty. As explained above, there are still some bugs and the design layer hasn’t been done at all.
I don’t know yet if I’m going to continue this game, but I think it’s a good start, so I may try to improve it in the not too distant future. I have a few plans to make it nicer, including:
My best score is 9840 points, level 7 I guess.
]]>I’ll probably talk a lot about my blog in this article because it’s the latest project I worked on, and I really tried to do things right, so I may draw some parallels as examples. Anyway, let’s go!
The very first thing I do when starting a new web project is think. I spend days boggling my mind about the way I should do this and that… Jumping into the IDE to start coding right away isn’t a good idea. I used to do this in the past, and it often resulted in costly mistakes.
So, let’s think about it. And when I think I have a decent idea of what it’s going to look like, I’m sketching some sort of schedule. In most cases, this is as follow:
Things may vary depending on the project, its scale and such, but in most cases this is the process I try to follow. I think it’s very important to do things in the right order: the objectives define the content, which defines the architecture, which defines the design, which defines the way to code.
Please note this is only my way of doing things, do not take it as the Holy Bible. And please be sure to tell me if you feel like something is wrong here. Okay, so now we have some kind of roadmap, let’s dig into each part a little deeper, shall we?
This is the very first thing you have to do. Actually, it should even be implicit since you need a goal in order to achieve something. When you eat, it’s because you’re hungry. When you open a new tab on your browser, it’s because you want to make a search or visit a website. If you want to make a website or an application, it’s to suit some needs, to fill a goal.
When I redesigned my website (because the old one was just a single page with 5 lines of text on it), I tried to think about what this site would be for. Here is what I came up with:
Even if it’s implicit, it’s important to write things down. I often found myself in the past coding something I hadn’t thought about at first, which resulted in refactoring large chunks of code. Writing things down from the beginning is a good way to know where you’re going.
Now that we have thought about the goal of the site/application, it’s time to think about the content which will fulfill that goal. A site without content is lame and completely useless. Sometimes I see people asking for a website without anything to put in it and I wonder: what is the point? Having a place on the internet with your name on it? Okay, actually that’s kind of cool, but it’s even better if you have things to tell.
My way of doing it is to find as much content as possible for each of the objectives of the site/application. Then, if there is too much, I filter to keep only the best ideas. Too much is better than too few. You might want to read Luke Wroblewski’s notes on Jeffrey Zeldman’s “Content First” talk at An Event Apart Washington DC.
For this website, the content was pretty straightforward: the landing page should have some information about me (like contact details). The resume should be, well, my resume. And I already had 2 ideas of articles for the new blog (one about the redesign, and one about CSS preprocessors).
So we managed to find some general content for our website/application. We can now think about the architecture of the whole thing. How will it be divided? In most cases, this step is pretty easy to do. You can think of it as “what my main navigation will look like?”.
Maybe it’ll have a contact page, a products page, a shop section and so on. Then, ask yourself if there will be another level of structure in a section. Shop may contain various other sections. All of this will give you a structure for your site.
This site’s structure is very easy:
Plus, it occurred to me that doing this helps me create the development environment when I come to the coding part.
This is when things are getting fun. Yeah, because the previous steps were pretty boring, right? Who wants to spend hours thinking about something? WE WANT TO MAKE THINGS! So here it is. From now on, it’s all about doing a bunch of stuff.
But there is still no code. We first have to draw a sketch of the website/application. What will it look like? Physically, I mean. This is where I face some trouble, as far as I am concerned. If, like me, you never know how to start, I figured out that asking some very basic questions often helps to find out where to start. Things like:
I like starting on paper. I think it’s easier because you can’t hide behind visual effects like gradients, shadows and such. It’s all about structure and where the various elements will be positioned.
When I’m finally happy with a sketch, I try to redo it on my computer. Sometimes I go straight to CSS when things look quite simple, but it occurred to me there are better ways to go. I think one of the best would be to use an online tool like WireFrame.cc to make a nicer sketch than the one you drew on paper, still focused on structure rather than visual effects.
When it looks good, I personally go for CSS. Some people prefer working on a detailed mockup in Photoshop first, but since I’m shitty as hell when it comes to this software and I’m quite confident with CSS, I prefer going straight to CSS and designing in the browser. Plus, designing in the browser is very trendy these days!
Although, I know some designers simply cannot design in the browser because they turn their brain to “technical mode”, losing their creative mind. I don’t blame them at all since I’m not a designer, browser or no browser. Anyway, if you plan on using Photoshop, I’d suggest not doing too much with visual effects, but it’s more a matter of opinion here. I think this stuff can be figured out afterwards, not necessarily during the design process, but I could be wrong.
Yaaaaay! Here comes the code! The funny part. But before coding, I like to set up my workflow in order to make everything right for the whole development process. Setting up the working environment can mean a lot of things depending on the project:
Here is the way I go.
I really like when things are well ordered, plus I don’t want to keep messing with relative paths, so I like to create the whole folder structure at first so I don’t have to bother about it anymore. Here is the way I do it:
- sass/ for SCSS stylesheets if I use Sass
- stylesheets/ for CSS stylesheets
- scripts/ for JavaScript scripts
- images/ for images
- fonts/ for both web and icon fonts
- feeds/ for RSS
- resources/ for various stuff

This site also uses a blog/ folder and a resume/ folder to keep things clean.
A project involving some kind of back-office would also include a fake admin/
folder (with a fake login form) and a real admin folder with some random name.
Depending on the project, the development environment may vary. A PHP project requires EasyPHP or MAMP to work, a Java-based project would require the JDK, and so on. Even for the frontend side, you may want to have some tools like preprocessors (Markdown for HTML, Sass for CSS, CoffeeScript for JS or whatever).
Speaking of preprocessors, I build most of my projects on Sass so there are a few things I do before coding. I don’t know about you, but I don’t use any tool to manage Sass stuff (like Compass.app or CodeKit). Not because I’m a command-line nerd but because I work on different computers with different OS, meaning I would have to install everything all over again. So I tend to do Sass stuff with command lines through Ruby Command Prompt.
Depending on what I need for the project, I might also need some scripts like jQuery, Modernizr, Prefixfree, Prism or other things. I may also want to include an icon font, who knows, so I download the files before going any further, just in case.
To speed up development, I like to do all the dirty stuff at the beginning of a project, to be able to focus on the cool stuff without being interrupted. So I often create a bunch of files I know I will need.
The following code is a valid HTML document which can be used to create any page in the site. Its content may vary according to the needs of the project but in most cases it includes conditional HTML classes, IE stylesheet, commented scripts, jQuery call, Google Analytics snippet and so on. Everything’s ready!
<!DOCTYPE html>
<html class="no-js" lang="en">
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />
<meta name="viewport" content="width=device-width" />
<title>New title</title>
<link rel="stylesheet" href="stylesheets/styles.min.css" />
<!--[if lte IE 7]>
<link rel="stylesheet" href="stylesheets/ie.css"
/><![endif]-->
<!--[if lt IE 9]>
<script src="https://html5shiv.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->
<!-- <script src="scripts/modernizr-2.6.2.min.js"></script> -->
</head>
<body>
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
<script>
window.jQuery ||
document.write('<script src="js/jquery-1.8.2.min.js"><\/script>')
</script>
<!-- <script src="js/functions.js"></script> -->
<!--
<script>
var _gaq=[['_setAccount','UA-XXXXX-X'],['_trackPageview']];
(function(d,t){var g=d.createElement(t),s=d.getElementsByTagName(t)[0];
g.src=('https:'==location.protocol?'//ssl':'//www')+'.google-analytics.com/ga.js';
s.parentNode.insertBefore(g,s)}(document,'script'));
</script>
-->
</body>
</html>
What’s cool with Sass is it gives you the option to use a bunch of stylesheets and concatenate all of them into one single file without any performance loss. The following is my styles.scss file.
@import 'mixins';
@import 'variables';
@import 'default';
@import 'reset';
@import 'typography';
@import 'grid';
@import 'font-awesome';
@import 'main';
When compiled, it takes all the following SCSS stylesheets and concatenates and compresses them into one single styles.min.css file.
- _variables.scss contains things like colors, default margins and such
- _default.scss has stuff like border-box, clearfix and useful classes
- _reset.scss contains Eric Meyer’s CSS reset
- _typography.scss comes from Twitter Bootstrap and includes a bunch of typographic rules
- _grid.scss is the 1140px Grid stylesheet
- _font-awesome.scss is the FA stylesheet, now replaced by the CDN from Tim Pietrusky’s WeLoveIconFonts
- _main.scss contains stuff relative to the current project, the actual CSS (empty at first)

Most projects require at least a little bit of JavaScript. Especially since I’m really not good at JavaScript, meaning I feel like I have to use jQuery in order to achieve anything. Anyway, I usually have a few things in the scripts/ folder to begin with:
- jQuery (jquery-1.8.2.min.js)
- Modernizr (modernizr-2.6.2.min.js)
- functions.js where I put my stuff to run various things

In the past, I also used Prefix-free for CSS prefixing but I don’t anymore since Compass does it for me.
Anyway, once I’ve done all this stuff, I can go further into the actual development.
I guess I’ve covered pretty much everything I’m dealing with when starting a new project. Some things may vary depending on the project, but here is the general process.
What about you? How do you deal with a new project? What’s your workflow?
If you’d like to read about how to build a color scheme for a website, you might be interested in the article Build a color scheme: the fundamentals, or in Principles of Color and the Color Wheel if you’d like to read about the color wheel.
We will see how we can define colors in style sheets, what each one can be used for and more. But first, let me introduce the topic.
Colors in CSS are defined in the sRGB color space. sRGB stands for “Standard Red Green Blue”, where colors are defined through three channels: Red, Green and Blue.
From there, we have various ways to describe colors in CSS. Some of them, like keywords and hexadecimal, have been there almost since the beginning of the web, while others, like HSL or RGB, came later.
Let’s talk about each one of these definitions to understand them better.
Let me start with the RGB syntax since it’s the most fundamental thing to understand in order to comprehend how other notations like hexadecimal work.
As I said above, RGB stands for Red, Green and Blue. Remember when you were a little kid and were painting with some cheap watercolor? Well, this is kind of the same thing, except that colors behave a little bit differently on screen and on paper. Let me explain myself:
| On paper | On screen |
| --- | --- |
| Main colors are Red, Yellow and Blue | Main colors are Red, Green and Blue |
| Mixing 3 colors makes a brownish black | Mixing 3 colors makes a grey shade |
| A bit of blue + some red make a nice purple | A bit of blue + some red make a dark purple |
| The less color you use, the brighter it is | The less color you use, the darker it is |
| Representation is a circle with neither white nor black | Representation is a cube with black and white |
This picture is the RGB color model mapped to a cube. What you can see is this: the horizontal x-axis shows red values increasing to the left, the y-axis shows blue increasing to the lower right, and the vertical z-axis shows green increasing towards the top. The origin, black, is the vertex hidden from view.
To describe a color using the RGB model, you have to define a value for the red channel, a value for the green channel and a value for the blue channel. Okay, but what type of value? Percentages? Arbitrary? Any?
An RGB value can be defined using four different syntaxes, but only two of them are available in CSS:
So, to summarize, we end up with two different ways to define CSS colors with the rgb()
function: percentages and integers between 0 and 255. Let’s illustrate this with an example, shall we?
.black {
/* I’m black! */
color: rgb(0, 0, 0);
color: rgb(0%, 0%, 0%);
}
.white {
/* I’m white! */
color: rgb(255, 255, 255);
color: rgb(100%, 100%, 100%);
}
.purple {
/* I’m medium purple! */
color: rgb(128, 0, 128);
color: rgb(50%, 0%, 50%);
}
.light-purple {
/* I’m fuchsia! */
color: rgb(255, 0, 255);
color: rgb(100%, 0%, 100%);
}
.dark-purple {
/* I’m deep purple! */
color: rgb(64, 0, 64);
color: rgb(25%, 0%, 25%);
}
Important: when using percentages, you have to set the unit even if it is 0. If you don’t, some browsers may be unable to parse it.
As seen previously, when using the RGB system we can also use an alpha channel, which is set to 1 by default. This channel allows us to modify the opacity of a color, or its transparency if you will.
To use this channel in CSS, you’ll call the rgba() function instead of the rgb() one. However, note the alpha channel is always defined as a float clamped between 0 and 1.
.black {
/* I’m half transparent black! */
color: rgba(0, 0, 0, 0.5);
color: rgba(0%, 0%, 0%, 0.5);
}
.white {
/* I’m 2/3 transparent white! */
color: rgba(255, 255, 255, 0.33);
color: rgba(100%, 100%, 100%, 0.33);
}
.red {
/* I’m fully transparent red, so kind of invisible */
color: rgba(255, 0, 0, 0);
color: rgba(100%, 0%, 0%, 0);
}
This can be very useful in various situations. Let’s say you have some kind of background image and want to write on it without losing readability or putting a big white box on top of it. This is the perfect use case for RGBa!
.parent {
background-image: url('my-picture.jpg');
}
.child {
background: rgba(255, 255, 255, 0.75);
color: rgb(51, 51, 51);
}
This way, the child element will have a white background with 75% opacity, showing its parent’s background without risking any issue with readability.
Most of the time, CSS colors are specified using the hexadecimal format, which is a 6-character-long string using numbers from 0 to 9 and letters from A to F, starting with the hash sign # (e.g. #1A2B3C). We refer to this syntax as a “hex triplet”.
Okay, but what does this mean? I agree it’s not that simple. Basically, hexadecimal colors are some sort of code for RGB colors: the first two characters stand for the red value; the 3rd and 4th characters stand for the green value; and the last two characters are there for the blue.
Since an 8-bit byte can hold 256 values, we usually use a base-16 system to display them. This system is called hexadecimal. So basically those 3×2 digits stand for 3 values from 0 to 255 converted to base 16, as you would use in RGB.
Okay, I can understand if you’re lost here, so we’ll try a little example. Let’s say you want to make a pure red (rgb(255, 0, 0)): thanks to this awesome converter, you convert 255 to base 16 and find out it equals FF. If you try to convert 0, you’ll see it’s 0 as well in base 16. So your hex triplet would be #FF0000. Simple, isn’t it?
So this was the theory, alright? It doesn’t mean you have to use a base-16 converter every single time you want to use a color in CSS. I’m simply explaining how hexadecimal colors are composed. Now in real life, you’ll simply use a color picker like Photoshop’s or whatever.
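If you’re curious, the conversion is also easy to script. Here is a quick JavaScript sketch (the rgbToHex helper name is mine, not part of CSS or the article):

```javascript
// Convert an RGB triplet (0–255 per channel) into a hex triplet string.
function rgbToHex(r, g, b) {
  return (
    '#' +
    [r, g, b]
      // toString(16) converts to base 16; padStart keeps 2 digits per channel
      .map((channel) => channel.toString(16).padStart(2, '0'))
      .join('')
  );
}

console.log(rgbToHex(255, 0, 0)); // "#ff0000" – pure red
console.log(rgbToHex(128, 0, 128)); // "#800080" – medium purple
```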
Alas, you can’t edit the alpha channel when defining colors in hexadecimal; it’s simply not possible. If you really want to change the opacity, you can still turn your hex triplet into an RGBa quadruplet, or use the opacity property.
Important: beware of the opacity property, however. It changes the opacity of the element itself, and of all its child elements. Plus, it is not supported by Internet Explorer 6, 7 and 8.
The fact is, hexadecimal is really unfriendly. Nobody knows what color is associated with a hex triplet at first glance, because it’s a syntax computed by the machine, for the machine.
RGB is slightly better, especially when you’re using percentage values, but it’s not wonderful either. If I tell you rgb(54%, 69%, 23%), can you tell me what color it will be? Even approximately? I guess not.
That’s why there are keywords. Keywords are real color names like red, green and blue associated with actual RGB / hex triplets. Back in the day, the HTML 4.01 standard proposed 16 different keywords:
Then the CSS 2.1 specification added the orange keyword. Finally, CSS3 came with 130 additional keywords for a total of 147 keywords (134 non-gray, 13 gray).
I won’t list all of them here because it would be too long. However, here is a visualization of all of them on a hue wheel by Eric Meyer (see the annotated version by Tab Atkins Jr.):
The point of this work is to show that keywords are associated with fairly random colors: they were not chosen according to their position on the hue wheel.
Eric Meyer also created a color equivalents table, which you can find here, to know which keyword is associated with which color, with hexadecimal, RGB (both syntaxes) and HSL versions.
The point of keywords is to use basic colors with words that actually mean something. I say “basic” because most of the time, you’ll want a custom color that doesn’t have a keyword. But whenever you want a plain red or a silver grey, you don’t have to use a hex or RGB triplet; you can use one of the 147 keywords (all perfectly valid, even in old browsers like Internet Explorer 6).
There are two keywords which are a little bit special since they do not refer to an RGB triplet. Those are transparent and currentColor.
The transparent value has existed since CSS1 but was only valid as a value for the background property. Nowadays, transparent is a valid keyword for any property accepting a color value (color, border-color, background, shadows, gradients, etc.).
Its effect is pretty straightforward: it makes the color (or background-color or whatever) of the element transparent, as it is by default when no color is specified.
What’s the point, you say? It restores the default transparent color when a color value you can’t remove is already set.
currentColor is a CSS3 value allowing you to use the current color value as a default for another property. Have a look at the code below.
.my-element {
color: red;
border: 5px solid currentColor;
}
The border will be red since the defined color is red. If no color was set, it would have been black, since the default value for the color property is black.
You want to know what’s awesome? currentColor (case-insensitive) is the default value for a bunch of things. From my tests:
It means you can do one of those and be perfectly valid:
.my-element {
color: red;
border: 5px solid; /* This will be red */
box-shadow: 10px 10px 5px; /* This will be red */
text-shadow: 0 2px 1px; /* This will be red */
}
HSL stands for Hue, Saturation and Lightness. Don’t worry, HSL is not another color format. It’s only another representation of the RGB model. This cylindrical representation aims at showing the RGB model in a more intuitive way than the previously seen cube.
The angle around the central vertical axis of the cylinder corresponds to the “hue”, which is basically the color you want. Take a chromatic wheel: at 0° you have red, at 120° you have green, at 240° you have blue, and you go back to red when you reach 360°.
The distance from the central vertical axis of the cylinder corresponds to the “saturation” (also called “chroma”). It can be understood as the quantity of black and white you add to your color. When set to 100%, the color is “pure”, but when you reduce the saturation you’re creating a “mixture”, progressively moving your color to some kind of grey.
The distance along the vertical axis corresponds to the “lightness” (also called “value” or “brightness”). To put it simply, the lightness is there to move your color towards white or black. A pure color (like red, blue, orange, etc.) has a lightness of 50%. If you want to darken or lighten your color without turning it into an ugly grey, you’ll change the lightness value.
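Since HSL is only another view of the same RGB model, the two map onto each other with a standard formula. Here is a JavaScript sketch of that conversion (the helper name and console checks are mine):

```javascript
// Convert HSL (h in degrees, s and l as 0–100 percentages)
// into an RGB triplet, using the standard conversion formula.
function hslToRgb(h, s, l) {
  s /= 100;
  l /= 100;
  const k = (n) => (n + h / 30) % 12;
  const a = s * Math.min(l, 1 - l);
  const f = (n) =>
    l - a * Math.max(-1, Math.min(k(n) - 3, Math.min(9 - k(n), 1)));
  return [f(0), f(8), f(4)].map((v) => Math.round(v * 255));
}

console.log(hslToRgb(0, 100, 50)); // [255, 0, 0] – pure red
console.log(hslToRgb(120, 100, 50)); // [0, 255, 0] – pure green
console.log(hslToRgb(240, 0, 100)); // [255, 255, 255] – white, hue irrelevant
```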
To describe a color using the HSL representation, you have to define parameters for hue, saturation and lightness. If you don’t know how to start, this is what I recommend:
.white {
/* I’m white! */
color: hsl(0, 0%, 100%);
}
.black {
/* I’m black! */
color: hsl(0, 0%, 0%);
}
.red {
/* I’m red! */
color: hsl(0, 100%, 50%);
}
Note that when you want black or white, the hue value you set doesn’t matter since neither is on the wheel. It means hsl(0, 0%, 100%), hsl(120, 0%, 100%) and hsl(240, 0%, 100%) are all white.
As with RGBa, you can set a value for the alpha channel on an HSL color. It works exactly the same way RGBa does: it accepts a float value between 0 and 1, such as 0.56.
.parent {
background-image: url('my-picture.jpg');
}
.child {
background: hsla(0, 0%, 100%, 0.75);
color: hsl(0, 0%, 30%);
}
You may or may not have heard about System colors. At first, I didn’t want to talk about them because they are deprecated in the CSS3 specification, but I thought it could be interesting to drop a few lines as a matter of curiosity.
System colors are a little bit special since they are not matched to an RGB equivalent, at least not directly. They are keywords associated with a color related to the user’s operating system (Windows XP, Mac OS X, Linux Ubuntu, etc.) like buttonFace or activeBorder.
Since the goal of CSS specifications is to standardize things, you understand why System colors were announced as deprecated. Plus, not all operating systems support all the System color keywords; basically, it’s a mess.
If you want a complete list of system color keywords, please refer to this documentation on Mozilla Developer Network.
Honestly, this is really up to you. In the end, an RGB triplet is generated, parsed and applied no matter the way you declared it. The browser’s parser doesn’t care whether you prefer hsl(0, 100%, 50%) over rgba(255, 0, 0, 1).
/* This will be red, whatever you pick */
.red {
color: red;
}
.red {
color: #f00;
}
.red {
color: #ff0000;
}
.red {
color: rgb(255, 0, 0);
}
.red {
color: rgb(100%, 0%, 0%);
}
.red {
color: rgba(255, 0, 0, 1);
}
.red {
color: rgba(100%, 0%, 0%, 1);
}
.red {
color: hsl(0, 100%, 50%);
}
.red {
color: hsla(0, 100%, 50%, 1);
}
Now if you want my way of doing things with colors, here is what I do in most cases:
What I think is really cool with HSL, however, is the fact that you can get a shade of the same color instead of another color by tweaking the lightness. This is a thing you can’t do with other syntaxes.
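For instance, here is a small sketch of what that enables (the hue and class names are made up for the example):

```css
/* One hue (210), three shades: only the lightness changes */
.button {
  background: hsl(210, 80%, 50%);
}
.button:hover {
  background: hsl(210, 80%, 40%); /* same color, darker shade */
}
.button:active {
  background: hsl(210, 80%, 30%); /* darker still */
}
```

With hex or RGB, producing these three shades would mean recomputing all three channels each time.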
CSS preprocessors (at least some of them) provide built-in functions to play with colors. Things like saturate, darken, hue rotation and such. Let me introduce some of them.
lighten(@color, @percentage); /* Makes lighter */
darken(@color, @percentage); /* Makes darker */
saturate(@color, @percentage); /* Makes more saturated */
desaturate(@color, @percentage); /* Makes less saturated */
fadein(@color, @percentage); /* Makes more opaque */
fadeout(@color, @percentage); /* Makes more transparent */
fade(@color, @percentage); /* Sets the color’s opacity to @percentage */
spin(@color, @degrees); /* Rotates the hue wheel by @degrees */
mix(@color1, @color2, @percentage); /* Mixes 2 colors with a default weight of 50% */
contrast(@color1, @darkcolor, @lightcolor); /* Returns @darkcolor if @color1 is >50% luma (i.e. is a light color), otherwise returns @lightcolor */
rgba($color, $alpha) /* Convert a hex color into a RGBa one */
red($color) /* Gets the red component */
green($color) /* Gets the green component */
blue($color) /* Gets the blue component */
mix($color-1, $color-2, [$weight]) /* Mixes 2 colors together with a default weight of 50% */
hue($color) /* Gets the hue component */
saturation($color) /* Gets the saturation component */
lightness($color) /* Gets the lightness component */
adjust-hue($color, $degrees) /* Rotates the hue wheel */
lighten($color, $percentage) /* Makes lighter */
darken($color, $percentage) /* Makes darker */
saturate($color, $percentage) /* Makes more saturated */
desaturate($color, $percentage) /* Makes less saturated */
grayscale($color) /* Converts to grayscale */
complement($color) /* Returns the complement */
invert($color) /* Returns the inverse */
alpha($color) /* Gets the alpha component (opacity) */
opacity($color) /* Gets the alpha component (opacity) */
opacify($color, $percentage) /* Makes more opaque */
fade-in($color, $percentage) /* Makes more opaque */
transparentize($color, $percentage) /* Makes more transparent */
fade-out($color, $percentage) /* Makes more transparent */
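To give an idea of how these Sass functions combine in practice, here is a small sketch (the $brand value and class names are made up for the example):

```scss
// Deriving a small palette from one base color with Sass color functions
$brand: #cc3f85;

.button {
  background: $brand;

  &:hover {
    background: darken($brand, 10%); // same hue, a darker shade
  }

  &.is-disabled {
    background: desaturate($brand, 40%); // washed-out variant
  }
}
```

The nice part is that changing $brand regenerates the whole palette at compile time.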
red(color) /* Gets the red component */
green(color) /* Gets the green component */
blue(color) /* Gets the blue component */
alpha(color) /* Gets the alpha component */
dark(color) /* Checks whether the color is dark */
light(color) /* Checks whether the color is light */
hue(color) /* Gets the hue component */
saturation(color) /* Gets the saturation component */
lightness(color) /* Gets the lightness component */
As I was researching to write this article, I understood that color stuff is very complicated, whether in optics, in paint or in digital. Those notions of “hex triplet”, “chromatic wheel”, “base 16” and “alpha” are so abstract that we can struggle to understand what they mean and what they represent.
Thankfully, in CSS we don’t have to use a base-16 converter every time we want to describe a color. Tools do it for us. But this is a really interesting topic, so I’d recommend you read about it. You’d be surprised how huge it can be!
Anyway, back to CSS, let me (re)introduce a few awesome tools and resources to help you deal with colors:
Thanks a lot for reading this article. If you have any question or feedback, please be sure to share. Also, if you find any mistake, I’d be glad to correct it. ;)
Since I’m sure you’ll be interested in a little CSS riddle (you will, won’t you?), let me tell you what this is about.
Will you be able to do this (I’m talking about the small line behind the text) following the restrictions below?
- a single element (h1) in the body element

I can’t wait to see the way you’ll figure this out, people. I personally found something with a few downsides, sadly. I’m sure some of you will be able to find a kick-ass solution. ;)
Good luck!
Thanks for participating! There have been a couple of answers to this riddle. Druid of Lûhn proposed something which works, but sadly it’s pretty awful for SEO since it involves an empty h1 tag.
Joshua Hibbert used linear gradients to do it (so did Raphael Goetter). This is a clever technique I thought about but didn’t give a try. My experience with gradients is not that good.
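For the record, the gradient idea boils down to something like this (my own hedged sketch, not their exact code; the positions and colors are made up):

```css
/* Paint a 1px horizontal line on the body with a background gradient,
   then give the title an opaque background to mask the line behind it */
body {
  text-align: center;
  background-color: #ffa;
  background-image: linear-gradient(black, black);
  background-repeat: repeat-x;
  background-position: 0 35px; /* vertical position of the line */
  background-size: 100% 1px;   /* a line 1px tall */
}

h1 {
  display: inline-block;
  padding: 0 10px;
  background: #ffa; /* hides the line behind the text */
}
```

The obvious downside is that the title’s background has to match the page background, so it breaks over a photo or a gradient.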
Here is the way I did it:
body {
text-align: center;
overflow: hidden;
background: #ffa;
}
h1 {
display: -moz-inline-box;
display: inline-block;
*display: inline;
*zoom: 1;
position: relative;
font-size: 30px;
margin-top: 20px;
}
h1:after,
h1:before {
content: '';
position: absolute;
height: 1px;
width: 1000px;
top: 50%;
right: 100%;
background: black;
}
h1:after {
left: 100%;
}
So basically, I used both pseudo-elements to create the line. To place them, I set the title to inline-block, and the parent’s (body) text-align to center.
Sadly, a few things suck with this technique, even if it works pretty well:

- it relies on text-align: center on the parent
- it requires hiding the overflow, so overflow: auto is not an option
Thankfully, the browser support is pretty good, at least way better than for the gradient version:
But since it’s only a tiny design improvement, I’ll definitely go with the gradient version on a live project. Thanks for participating. I’ll try to drop another challenge soon. :)
A code playground is an online tool allowing you to write some code, then save and share it. It’s often used for quick demos and reduced test cases. It’s a good alternative to the old .html file with its embedded <style>
and <script>
tags.
Playgrounds are becoming more and more popular and there are a bunch of options when you want to use one. Let me introduce the most popular ones:
Basically, they all do more or less the same stuff but each one has its own strengths and weaknesses. So in the end the choice is up to the user. I’d like to give you my opinion on this stuff but first, let’s make a little round-up.
Dabblet is an amazing playground, however it doesn’t support JavaScript. That being said, its author presented Dabblet as a pure CSS playground, so it’s not very surprising JavaScript isn’t supported.
What is a little bit more surprising however is that Dabblet doesn’t provide any support for preprocessors, especially CSS ones. Nowadays, it’s a pretty big deal when a playground doesn’t allow users to code with preprocessors.
Plus, it seems to be very buggy sometimes. Shortcuts don’t work as expected, the cursor bounces to the top of your document, etc. It’s too bad because it has a minimalist and cute interface.
JSFiddle is a wonderful playground when it comes to JavaScript development since it provides a wide range of JavaScript libraries, probably more than you’ll ever need. Problem is, it doesn’t use a live-reload system, meaning you have to hit “Run” every time you make a change. It’s kind of annoying, but for JavaScript prototyping, it’s amazing.
CSSDeck is fairly new on the playground scene but it’s the only one providing the ability to record your code while you type it, in order to make some kind of video. Basically, you can make video tutorials with CSSDeck, which you can’t do with other playgrounds.
CodePen is one hell of a playground. It provides very strong tools for each of the 3 supported languages and provides awesome features for registered users like personal gallery, tags, forks, likes and follows, various themes, etc.
Plus, the authors pick the best pens on the site and feature them on the front page. This way you can have a look at the best frontend work out there without having to search through thousands of pages.
Honestly, I think CodePen is far ahead of any other playground out there. All in all, it provides more options than the others, it’s more stable, less buggy, and far more popular even though it’s only 6 months old.
I used to work a lot in Dabblet but I’ve always found those tiny little bugs very annoying. Then I switched to JSFiddle but the lack of a live reload was bothering me. Then came CodePen and it was kind of a revelation.
Shortly after the launch, I spent a huge amount of time on CodePen playing with CSS. Back then, I did between 1 and 5 pens a day (inspired by Dribbble), most of them hitting the front page. It was very amusing. Now, I’m not doing much anymore because I use my free time to write articles for Codrops.
Anyway, if you’d like to have a glance behind the scenes of CodePen, David Walsh recently interviewed Chris Coyier about it. They talk about challenges to get there, technical details and of course what’s planned for the future.
I’ve made a comparison of these 4 playgrounds as a table for more clarity. Here is the JSFiddle. Yeah, I made a JSFiddle, because on CodePen everything is public, and I don’t want to drop this kind of thing there. It’s probably one of the only bad sides of CodePen, and it will soon be gone.
What about you? What’s your favorite CSS playground?
After the launch, it occurred to me the design was a bit gloomy, so I wanted to add a color to cheer things up. After a dark blue and a creepy green, I ended up with hot pink, and a quick survey on Twitter encouraged me to keep it. So pink it is! Hope you like it.
Speaking of surveys, another quick one about text alignment told me to switch to left. It looks like people dislike justified text on blogs. I liked it, but I’m not the main reader of this blog. :D
I was playing with Sass during the last couple of days and decided it could be cool to build the blog with it, so now it is. Since the site is pretty small, it’s no big deal. Actually, I used only very little of the potential of Sass (or any other CSS preprocessor):
Anyway, it’s cool.
You may have also noticed I’ve included Prism.js from Lea Verou on the blog as a syntax highlighter for code snippets. I’m pretty happy with it, I think it makes the code easier to read.
The only issue I see right now with Prism.js is that it has some trouble with preprocessed CSS syntax such as LESS and Sass, but it’s no big deal.
To satisfy a few requests, I agreed to set up a comment system to allow you to say stuff. Since I decided I won’t do any PHP on the site, I had only a few options, if not just one. Thankfully, Disqus is widely used all around the world now, and honestly I would have never built such a wonder myself, so I’m pretty excited about it.
Depending on how things go, I’ll have a closer look into the options, but for now it’s far better than anything I would have ever hoped for, so I’m very happy with it. So please drop a comment if you have anything to tell. ;)
You may or may not have noticed yet but from now on, my articles on Codrops will be featured on the index of the blog. To distinguish them from other articles, they are annotated with [Codrops]. What do you think? Good idea? Bad idea?
I’ve already made a bunch of tiny bug fixes like broken links, inadequate margins, little issues on mobile but some bugs may persist so if you still find one, please tell me: I’ll fix it as soon as possible.
If you have any suggestion on how we could make this place better, please feel free to speak up. By the way, I’d like to thank all of you giving feedback and helping me improve this place. It means a lot, keep it up! :)
So this post will be about my own experience with CSS preprocessors. For the record, I recently wrote an article for Codrops entitled “10 things I learnt about CSS” in which I talked a lot about preprocessors, so I’ve read (and tried) a bunch of things on the topic lately.
Anyway, and before anything, please note I’m not a hardcore CSS preprocessor user. I’m more of a novice with these tools, but I’ve already worked a little bit with 2 of them: first LESS, then Sass. I recently moved from LESS to Sass and don’t plan on going back.
A few weeks ago, I wanted to take a real shot at CSS preprocessors after hours of playing on CodePen, so I read a few things to make a choice. To put it simply, there are currently four major CSS preprocessors: LESS, Sass, Stylus and CSS Crush.
I’d never heard much about Stylus, so it was not an option for me. I wanted quick access to complete documentation, since I was a little bit scared to take the plunge. And even though CSS Crush sounded really cool because I’m familiar with PHP, I’d read too little about it to consider it a real choice.
So I had to choose between LESS and Sass, like almost everyone else. One thing made the difference in favor of LESS: it can run locally. You see, I’m more of a client-side kind of person. I’m really uncomfortable when it comes to servers and command lines, so the fact that LESS could be compiled with JavaScript on the fly sounded awesome to me. On the other hand, Sass required installing Ruby and running some commands, and that scared me. So LESS it was.
I played with LESS for a few days, tried a few things and even built my own framework with it. It was really cool to see CSS pushed to an upper level, and I was starting to think I could do all my future projects with LESS. Until I realized LESS’s client-side compilation is awful performance-wise.
Anyway, that wasn’t the worst part. I could still learn how to run the server-side part of LESS, or switch to LESSPHP with the help of my brother, who uses it at work. No, the worst came when I tried to dig deep down into the entrails of LESS.
One of the first “complicated” things I tried to create was a mixin handling CSS arrows the same way CSSArrowPlease does. It took me a couple of hours, but I finally succeeded. However, I noticed something counter-intuitive along the way: conditional statements.
The way I wanted to write my mixin looked something like this:
.mixin(parameters) {
  /* Basic stuff here */
  if (direction = top) {
    /* Conditional stuff here */
  } else if (direction = bottom) {
    /* Conditional stuff here */
  } else if (direction = left) {
    /* Conditional stuff here */
  } else if (direction = right) {
    /* Conditional stuff here */
  }
}
The fact is, LESS doesn’t handle if/else statements. Instead, it provides guarded mixins (a mixin applies when a parameter exists, or equals / is less than / is greater than something). So basically, I had to do something like this:
.mixin(parameters) {
  /* Basic stuff here */
}

.mixin(parameters) when (direction = top) {
  /* Conditional stuff here */
}

.mixin(parameters) when (direction = bottom) {
  /* Conditional stuff here */
}

.mixin(parameters) when (direction = left) {
  /* Conditional stuff here */
}

.mixin(parameters) when (direction = right) {
  /* Conditional stuff here */
}
It may look similar at first glance, but it involves repeating the whole mixin signature for every single condition, and there is no real else branch: each guard stands on its own.
Anyway, I was just a little frustrated not to be able to use what seemed intuitive to me: real if/else conditional statements. But all in all, I succeeded in writing my mixin, so it was not so bad. Things started getting bad when I wanted to do moar.
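For what it’s worth, newer versions of LESS later added a default() guard that serves as the missing else branch. A minimal sketch, assuming a LESS version that supports it:

```less
.mixin(@direction) when (@direction = top) {
  /* Conditional stuff here */
}
/* default() matches only when no other guard matched, i.e. the else case */
.mixin(@direction) when (default()) {
  /* Fallback stuff here */
}
```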
For a recent Codrops article on pure CSS loading animations, I wanted to include a few things about CSS preprocessors and how easy they are supposed to be to use. Actually, it could have been very, very simple if I hadn’t been using LESS. One of those things was a loop.
Loops are cool: they can handle a huge amount of operations in only a few lines, and even if you don’t need them every day in CSS, it’s nice to have the option. I wanted a loop to set the appropriate animation name on a dozen elements. This is more or less what I was expecting:
@nbElements: 10;

for (@i = 0; @i < @nbElements; @i++) {
  .my-element:nth-child(@i) {
    animation-name: loading-@i;
  }
}
Well, this is absolutely not how LESS handles loops. Actually, LESS doesn’t handle loops at all; you have to use a recursive mixin (a mixin calling itself) to reproduce the desired behaviour. This is what I ended up with:
/* Define loop */
.loop(@index) when (@index > 0) {
  (~'.my-element:nth-child(@{index})') {
    animation-name: ~'loading-@{index}';
  }
  /* Call itself */
  .loop(@index - 1);
}

/* Stop loop */
.loop(0) {}

/* Use loop */
@nbElements: 10;
.loop(@nbElements);
In what universe is this more user-friendly and intuitive than a classic for loop? Is there anyone here who would have thought of this at first? I started thinking LESS was not as perfect as I first thought, but sadly that was still not the worst part.
Things went very ugly when I wanted to manage @keyframes inside this loop. Yeah, I know: I like a challenge.
I know concatenation can be somewhat annoying to handle depending on the language, but I was far from thinking LESS was this bad at it. First thing I discovered: you can’t use or concatenate a variable as a selector without a workaround, and you absolutely can’t use a variable as a property name in LESS (at least as far as I can tell); only as a value.
/* This works */
.my-element {
color: @my-value;
}
/* This doesn’t work */
@my-element {
color: @my-value;
}
/* This doesn’t work either */
@{my-element} {
color: @my-value;
}
/* But this works */
(~'@{my-element}') {
color: @my-value;
}
/* And this can’t work */
.my-element {
@my-property: @my-value;
@{my-property}: @my-value;
(~"@{my-property}"): @my-value;
}
Two very annoying things there: we definitely can’t use variables as property names, and the concatenation syntax is ugly as hell. (~"@{variable}"), really? Actually, if you want my opinion, the biggest mistake they made was to name variables with the at sign (@).
It is somewhat well thought out, since CSS uses this sign for “alternative stuff” like media queries (@media), animation keyframes (@keyframes) and probably other things in the future (@page for example). I get the reasoning, and I admire the will to stick to the regular CSS syntax.
But come on… how come they didn’t think about variable concatenation and @keyframes / @page uses inside mixins?
Basically, LESS fails to understand @page and @keyframes inside mixins: it throws an exception, according to its source code. So you need two nested mixins: one handling your animation, the second one handling the keyframes. Sounds heavy and complicated? Well, it is. Let’s say you want to create a custom mixin using @keyframes and vendor prefixes (not asking much, right?); this is what you have to do:
@newline: `'\n'`; /* Newline */

.my-mixin(@selector, @name, @other-parameters) {
  /* @selector is the element using your animation
   * @name is the name of your animation
   * @other-parameters are the parameters of your animation
   */
  .keyframe-mixin(@pre, @post, @vendor) {
    /* @pre is the newline hack (empty on the first declaration)
     * @post is a variable fix to detect the last declaration (1 on the last one)
     * @vendor is the vendor prefix you want
     */
    (~'@{pre}@@{vendor}keyframes @{name} {@{newline} 0%') {
      /* 0% stuff here */
    }
    100% {
      /* 100% stuff here */
    }
    .Local() {}
    .Local() when (@post = 1) {
      (~'}@{newline}@{selector}') {
        -webkit-animation: @name;
        -moz-animation: @name;
        -ms-animation: @name;
        -o-animation: @name;
        animation: @name;
      }
    }
    .Local;
  }
  .keyframe-mixin('', 0, '-webkit-');
  .keyframe-mixin(~'}@{newline}', 0, '-moz-');
  .keyframe-mixin(~'}@{newline}', 0, '-ms-');
  .keyframe-mixin(~'}@{newline}', 0, '-o-');
  .keyframe-mixin(~'}@{newline}', 1, '');
}

.my-mixin('#whatever', name, other-parameters);
.my-mixin('#whatever', name, other-parameters);
Yeah, this is a complete nightmare. I’m not the one who wrote it; I searched for hours how to do this before finding a very complete answer on StackOverflow, leading to two other related topics with wonderful answers (here and there).
Note: the .Local() thing seems to be a keyword for “this”, but I couldn’t find any confirmation of that. If you have one, please catch me on Twitter.
So basically, here is what there is to say (still not from me):
- (~"@keyframes @{name}{") { … } renders as @keyframes name { { … }.
- To avoid the double { {, it requires a newline, which cannot be escaped directly, hence the variable @newline: `"\n"`;. LESS parses anything between backticks as JavaScript, so the resulting value is a newline character.
- Since { … } requires a selector to be valid, we use the first step of the animation (0%).
- To close the block, it requires something like (~"} dummy") { .. }. How ugly is that?
- So the opening line ends up as (~"@{pre} @@{vendor}keyframes @{name} {@{newline}0%"). What a nightmare…
- @{pre} has to be "}@{newline}" for every keyframes block after the first one.

Anyway, this was waaaaay too much for me. The point of CSS preprocessors is to ease CSS development, not to make it harder. So this is the moment I realized LESS wasn’t that good.
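For the record, the same vendor-prefixed keyframes idea takes only a few lines in Sass thanks to @content blocks. A minimal sketch with a hypothetical keyframes mixin, not the exact code from any of the linked answers:

```scss
@mixin keyframes($name) {
  /* Each block receives the keyframe steps through @content */
  @-webkit-keyframes #{$name} { @content; }
  @-moz-keyframes #{$name} { @content; }
  @keyframes #{$name} { @content; }
}

@include keyframes(loading) {
  0% { opacity: 0; }
  100% { opacity: 1; }
}
```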
I won’t make a complete and detailed comparison between Sass and LESS, because some people have already done it very well (Chris Coyier, Kevin Powell, etc.). I’ll only cover the few points I talked about earlier.
@mixin my-mixin($my-parameter) {
  /* Basic stuff here */
  @if $my-parameter == 'value' {
    /* Conditional stuff here */
  }
}
This is the Sass syntax for conditional statements in a mixin. Okay, it may lack some brackets, but it’s way easier than the LESS syntax in my opinion.
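Using such a mixin is then a single @include. A quick sketch, with 'value' standing in as a placeholder argument:

```scss
.my-element {
  /* 'value' is just an example argument matching the condition above */
  @include my-mixin('value');
}
```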
@for $i from 1 through 10 {
  /* My stuff here */
}
Once again, it may lack a few brackets, but it’s still very easy to understand how it works. It’s almost plain language: “for variable i from 1 through 10, do this”. It looks very intuitive to me.
Sass has absolutely no problem with concatenation, neither in selectors nor in property names. You only have to write #{$my-variable} to make things work.
#{$my-selector} {
  #{$my-property}: $my-value;
}
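Put together, the loop I wanted earlier becomes a handful of readable lines in Sass. A minimal sketch reusing the loading- animation names from the LESS example:

```scss
@for $i from 1 through 10 {
  /* Interpolation works the same way in selectors and in values */
  .my-element:nth-child(#{$i}) {
    animation-name: loading-#{$i};
  }
}
```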
Very quickly, here are the few things that make me say Sass is better than LESS, such as the @extend feature, which allows you to extend a class from another one. They are well explained in the links above.
Well, I’ve been moaning about LESS for the whole article, but honestly it’s not so bad. At least, it’s not so bad if you don’t plan on doing complicated and advanced things. Actually, there are even things LESS is better at, like the client-side compilation I mentioned earlier.
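As a quick illustration of @extend, a minimal sketch with made-up class names:

```scss
.message {
  padding: 1em;
  border: 1px solid;
}

.error {
  /* .error now shares all of .message's declarations */
  @extend .message;
  border-color: red;
}
```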
Whatever you pick, the choice is really up to you. All of this was only my opinion, based on my experience. LESS is still a good CSS preprocessor, but in the end I think Sass is simply better.
]]>Actually, it was the very recent redesign of daverupert.com (nice job, by the way) that led me to tackle this task. You see, for months (almost years!), and I don’t really know why, I thought I had to run a CMS like WordPress or whatever to handle a blog. But why bother? Simple HTML/CSS pushed to a server and you’re done, right?
This is a good question, since I’m not planning on heavy blogging. You may know I’m a writer for Codrops, and hell, I’m proud of it. Anyway, it takes up a good amount of my time, so I won’t be able to post things every day here.
However, I really wanted a place on the internet to talk about some things I can’t write about at Codrops. I’m still talking about web, don’t worry. Things like tools, personal experiences, stuff about my work, my side projects, or whatever.
Things are actually very simple for now, but I’m planning a few improvements to my workflow in the not-so-distant future. As of today, I write my posts in Sublime Text from a template file, then push them to the server with an FTP client.
There is no JavaScript (not even jQuery), no plugins, no PHP and no database. Only a tiny little stylesheet. Which means the site is fast, and that matters a lot to me, especially when it comes to mobile browsing.
Anyway, I’d like to be able to manage things a little better. For example, I’d like to write my articles in Markdown instead of regular HTML. Also, depending on whether I post a lot of code snippets or not I may want to add Prism.js for the syntax highlighting.
I’m currently learning about Jekyll so I can get rid of the FTP client and manage everything at a higher level, but it will take some time before I make the switch.
I know this may look a little flat, but I wanted it that way. It’s minimalist to focus on the content and nothing else. No fancy buttons, no complex multi-column layout, no heavy CSS transitions, etc. It’s a blog made with the content, for the content. This is why the font-size is huge, the line-height is generous, there is a lot of space, and so on.
But I may improve the design over time, of course. :)
Honestly I don’t know. Blogging when I have time and things to say. Maybe a contact page or something.
For now, I’m not planning on adding a comment system because I don’t think there is much need for it. Most people won’t take the time to comment anyway, and those who would can still catch me on Twitter. But if one day I feel like I should allow users to comment, I may think about Disqus, mainly because I don’t want to spend hours doing PHP stuff for this.
Anyway, if you find any bug or have any suggestion, please catch me on Twitter.