This Friday is all about the rise of TypeScript: AI influence has made it the most used language on GitHub. The roundup of why startups still choose React also covers Angular, Vue and Svelte, with the latter keeping the satisfaction crown. Dig into a detailed look at source map internals, and HTMX leaping to fetch() with a major breaking change in v4.0. For the CSS crowd, check out high-performance syntax highlighting, container queries, and fun with line-clamp and view transitions. The week brings mixed headlines too: the ever-confusing UK/EU cookie law news, clever accessibility tips, and technology that turns brain scans into rough thought captions. And for pure fun, Friday’s Scope Creep game. Happy Reading!
Loads of data, but most interestingly, it appears that AI has pushed TypeScript into becoming the most used language on GitHub.
And this one is also interesting:
Isn’t it a bit strange that you need third-party libraries to improve the library’s own performance? Maybe. React gives you the freedom to use Context, Redux, Jotai, Zustand or whatever else, but I would prefer an out-of-the-box solution.
Actually, the post reviews React, Vue, Svelte and Angular. There are some interesting findings, and my favourite is that Svelte maintains the highest satisfaction score, which I agree with.
https://evilmartians.com/chronicles/why-startups-choose-react-and-when-you-should-not
A great deep dive into how source maps are structured, encoded and decoded, and how they work under the hood.
https://www.polarsignals.com/blog/posts/2025/11/04/javascript-source-maps-internals
The author promised that there would never be an HTMX 3.0, and he kept that promise: the next version is HTMX 4.0. Its main change is the move from XMLHttpRequest to fetch(). Since this is a breaking change, they promise to keep v2.0 maintained for years while offering v4.0 as the way forward.
https://htmx.org/essays/the-fetchening/
https://motion.dev/blog/web-animation-performance-tier-list
“The CSS Custom Highlight API provides a way to style arbitrary text ranges without modifying the DOM structure.”
https://pavi2410.com/blog/high-performance-syntax-highlighting-with-css-highlights-api/
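The styling side of the API can be sketched like this. The highlight name is hypothetical, and the ranges still have to be registered from JavaScript (e.g. CSS.highlights.set('code-keyword', new Highlight(range))) before the pseudo-element applies:

```css
/* Style a registered highlight named "code-keyword" (illustrative name).
   The text ranges are created and registered from JavaScript;
   no extra DOM nodes are needed. */
::highlight(code-keyword) {
  color: rebeccapurple;
  font-weight: bold;
}
```

Only a limited set of properties (color, background-color, text-decoration, etc.) can be applied inside ::highlight(), which is part of why it is so fast.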
https://alfy.blog/2025/10/31/your-url-is-your-state.html
There is a long list of information they know: user preferences, device size, orientation, the number of items in the container, and more.
https://nerdy.dev/components-can-know
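A minimal sketch of a component reacting to its container rather than the viewport (class names are illustrative):

```css
/* The wrapper establishes a size container for its children */
.card-wrapper {
  container-type: inline-size;
  container-name: card;
}

/* The card switches to a two-column layout only when
   its own container, not the viewport, is wide enough */
@container card (min-width: 400px) {
  .card {
    display: grid;
    grid-template-columns: 120px 1fr;
  }
}
```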
An interesting example of how to use container query units.
https://ryanmulligan.dev/blog/transition-to-the-other-side/
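For reference, container query units resolve against the container’s dimensions (1cqi = 1% of the container’s inline size). A small sketch with illustrative selectors:

```css
/* Establish the container */
.panel {
  container-type: inline-size;
}

/* Heading scales with the panel's width, clamped to sane bounds */
.panel h2 {
  font-size: clamp(1rem, 5cqi, 2rem);
}
```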
Can we trust browsers with accessibility, and why do we use outline: 0?
https://medienbaecker.com/articles/trusting-the-browser
Practical examples with table sorting and a multi-step form.
https://piccalil.li/blog/some-practical-examples-of-view-transitions-to-elevate-your-ui/
We all know the old trick with overflow: hidden; white-space: nowrap; text-overflow: ellipsis, but a multiline clamp is also available.
https://blog.logrocket.com/css-line-clamp
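Both tricks side by side, as a sketch (class names are illustrative):

```css
/* Classic single-line truncation */
.one-line {
  overflow: hidden;
  white-space: nowrap;
  text-overflow: ellipsis;
}

/* Multiline clamp: show at most 3 lines, then an ellipsis.
   The -webkit- properties are the widely supported legacy syntax;
   line-clamp is the standard property gaining support. */
.three-lines {
  overflow: hidden;
  display: -webkit-box;
  -webkit-box-orient: vertical;
  -webkit-line-clamp: 3;
  line-clamp: 3;
}
```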
https://frontendmasters.com/blog/perfectly-pointed-tooltips-a-foundation/
Right, this is better than nothing: if the proposal is still under discussion by the end of 2025, then maybe by the end of 2026 there will be changes in the Digital Fairness Act, then probably another 2–3 years to adopt them, and a few more years to implement the solution in browsers. There is hope. Somewhere.
The best way to truly grasp agents (and their limits or possibilities) is to build one yourself. Whether you're a fan or a sceptic, hands-on experience is critical to making real judgments about the tech.
https://fly.io/blog/everyone-write-an-agent/
In short, the technology can’t read the thoughts of random people. However, using AI and brain scans, it can caption human thought. This could lead to communication tools for patients with paralysis, or perhaps even new interfaces that let us literally use our minds to control physical devices. Feels like living in a sci-fi book.
https://www.extremetech.com/science/researchers-develop-mind-captioning-using-brain-scans-and-ai