Remy

It all started with me trying to improve our Continuous Integration pipeline.

I'm a strong believer in having proper CI - the threshold for how much to invest in unit & integration tests is always tricky to set, but to me the bare minimum should be to have linting and type checking run on every commit.

Now, having that bare minimum is great, but it also needs to be as fast as possible. When you want commits & reviews to be fast, CI cannot be the one thing holding you back. Yet… this is what we would see in the best-case scenario on that bare minimum linting & type checking job: 1 minute and 11 seconds just to install dependencies. Obviously, the job has more to do afterwards, and that's where I'd prefer it to spend its time.

But wait, there's more. You may know that package managers have caches, and a known trick to speed up installs is to save that cache after CI runs, so it can be reused by subsequent runs. An easy way to do that nowadays is to use actions/setup-node's caching capabilities. But as soon as the lock file changes, typically when adding dependencies, the cache isn't reused, because the cache's hash is usually computed from the lock file. That's when we really thought we needed to do something.

We have been using Yarn 2 for quite some time, having originally switched to it for its native workspace support, which is great for monorepos - and we happen to have one. We use a lot of different dev tools (in no particular order: Vite, Vitepress, Astro, esbuild, Webpack, Eleventy, Firebase tools, Tailwind…) and many more actual dependencies. It's easy to understand how many dependencies we're bound to have when you see all the frameworks we support, whether on v or on Backlight.

You may know Yarn 2 for introducing the Plug'n'Play linker. To make it short, it completely forfeits the node_modules resolution mechanism and tells Node to rely on Yarn for dependency resolution. It's a really interesting idea, but dropping node_modules is a compatibility challenge that kept us away from trying it. We stuck - and are sticking - to node_modules for now.

Anyway, because Yarn 3 had been released for a few months with performance improvements, we decided to give it a try to see if it would speed up our builds. Upgrading to Yarn 3 is fairly simple:

> yarn set version berry
➤ YN0000: Retrieving
➤ YN0000: Saving the new release in .yarn/releases/yarn-3.1.1.cjs
➤ YN0000: Done in 0s 758ms

And there we go, we were upgraded to Yarn 3. I'll spare you another pair of screenshots, but that got us down a bit, to 4 minutes 50 seconds without cache and 57 seconds with cache.

I'm sparing you the screenshots for a good reason - I did mention we have been using Yarn 2 in that monorepo for a while. We've also been adding so many packages in different workspaces that we ended up with a lot of duplicated dependencies, i.e. with multiple versions of the same packages. So, just for the sake of the comparison, and because our original point was to speed up install times, I went ahead and completely removed the yarn.lock file and tested again. Without cache, we got down to 4 minutes and 1 second.

It's fair to say we've sped up our builds quite a lot already, but we wanted to go further yet. Now, that's where we decided to try out pnpm.

But as it was pointed out to us, there are also options you can set to speed up Yarn, so we tried them out to make it a fairer comparison. The suggestion mentions the 3 following options:

nmMode: hardlinks-global
enableGlobalCache: true
compressionLevel: 0

And they do help a lot, especially without cache, where we go down to 1 minute 10 seconds. Yarn is also slightly faster with a cache, yielding 45 seconds.

As for pnpm, its adoption has been really steady: it's close to 15k stars on GitHub at the moment. So if you are running Yarn, do consider trying it out! Chances are it will greatly improve your install times.
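The actions/setup-node caching mentioned above takes a single input. Here is a minimal sketch of such a workflow job, assuming GitHub Actions; the job name, Node version, and the `lint` script are placeholders, not taken from the original post:

```yaml
# Minimal GitHub Actions job sketch: enables Yarn caching via actions/setup-node.
# The cache key is derived from the lock file, which is why changing
# yarn.lock invalidates the cache and forces a full install.
name: ci
on: push
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16
          cache: 'yarn'            # caches Yarn's cache folder, keyed on yarn.lock
      - run: yarn install --immutable
      - run: yarn lint             # hypothetical script; replace with your own
```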
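The lock-file-based invalidation described earlier becomes visible when wiring the cache by hand with actions/cache instead of setup-node's built-in support. A sketch of that step, under the assumption of a default Yarn 2/3 cache location:

```yaml
# Manual cache wiring (alternative to actions/setup-node's cache input).
# The key embeds a hash of yarn.lock, so any lock file change is a cache miss;
# restore-keys lets the runner fall back to the most recent partial match.
- uses: actions/cache@v3
  with:
    path: .yarn/cache
    key: yarn-${{ runner.os }}-${{ hashFiles('yarn.lock') }}
    restore-keys: |
      yarn-${{ runner.os }}-
```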
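For reference, the three Yarn options discussed above go in `.yarnrc.yml` at the repository root. A sketch of the resulting config - only the three option values come from the post; the `nodeLinker` line and the comments are my additions:

```yaml
# .yarnrc.yml - speed-oriented settings.
nodeLinker: node-modules      # keep the classic node_modules layout (as the post does)
nmMode: hardlinks-global      # hardlink identical files from a machine-global store
enableGlobalCache: true       # share one cache across projects instead of per-repo copies
compressionLevel: 0           # store cache entries uncompressed, trading disk for CPU
```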