Quick Recap of Juejin Jueli Plan (掘力计划) Episode 26: The Practice and Future of Node.js

image

Juejin Meetup 26 Node.js Review

Three presentations

1/3 Discussion on SSR and Worker#

Speaker: Liang Wei, development engineer on ByteDance's Node.js Infra team
Five years of frontend experience, previously at Meituan and ByteDance; currently working on infrastructure development at ByteDance.

An introduction to basic SSR concepts and how SSR differs from CSR.

Benefits of SSR

  • Better SEO
  • Better FP/FCP (first paint / first contentful paint) times
  • A unified language across client and server
  • Can be introduced optionally, where needed

Scenarios where SSR is not a good fit

  • When the goal is to improve TTFB (time to first byte)
  • When the goal is earlier interactivity (TTI, time to interactive)
  • Heavy UI interaction
  • Complex user authentication

Runtime support includes Node.js, Deno, Cloudflare Workers, and more.

Worker#

Introducing service workers brings in the fetch handler, which essentially amounts to handling requests. From this perspective the talk introduced the WinterCG organization, which brings different runtimes together around compatible Web API standards, so that one set of code can run on different runtimes.

A standards-compliant runtime.
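To make the WinterCG idea concrete, here is a minimal sketch (my own illustration, not from the talk) of a fetch handler that uses only Web-standard APIs; the `export default { fetch }` shape is the convention used by Cloudflare Workers and similar runtimes.

```ts
// A minimal fetch handler using only Web-standard APIs (Request, Response,
// URL), so the same code can run on any WinterCG-compatible runtime.
export default {
  async fetch(request: Request): Promise<Response> {
    const { pathname } = new URL(request.url);
    if (pathname === "/hello") {
      return Response.json({ message: "hello from a standard runtime" });
    }
    return new Response("not found", { status: 404 });
  },
};
```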

Advantages of workers

  • A Web-standard runtime.
  • A standard runtime built on V8 can be independently controlled, from the APIs down to the runtime itself.
  • Private APIs can be developed on top of it.
  • Lightweight, with low server load: pm2 typically starts 3-4 Node instances, while a self-built worker runtime can run hundreds of worker processes.
  • Fast deployment: for example, moving from Node to workers inside ByteDance cut deployment time by 90%.
  • A universal solution: code can be isomorphic, sharing the same toolchain and development workflow.

Worker issues

  • Full compatibility with the Node environment is impossible: there is no fs, net, and so on, so tools like puppeteer and node-gyp do not work.
  • Only fetch can trigger requests; RPC and timers are unavailable, and TCP-based databases cannot be used.
  • Trace IDs for request tracking are not standardized, which easily causes confusion.

There are many compatibility issues.

A related talk for further reading: "ByteDance Serverless High-density Deployment and Web-interoperable Runtime Practice".

A practical worker example: a service that generates QR codes and can be launched quickly. Because it sits on the serverless high-density deployment system, there is no need to worry about scaling or performance quotas.
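As a rough illustration of such a service (not the actual ByteDance implementation), a QR code endpoint can fit in a single fetch handler, assuming the runtime lets you bundle the npm `qrcode` package:

```ts
// Hypothetical QR code service as a fetch handler; assumes the npm
// "qrcode" package can be bundled into the worker.
import QRCode from "qrcode";

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    const text = url.searchParams.get("text") ?? "https://example.com";
    // Render the QR code as an SVG string and return it directly.
    const svg = await QRCode.toString(text, { type: "svg" });
    return new Response(svg, {
      headers: { "content-type": "image/svg+xml" },
    });
  },
};
```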

Features: Lightweight, fast deployment.

Implementation scenarios:

  • Workers as a workflow engine, embedding nodes that execute code; for example, LangChain is used to process inputs and outputs.
  • Computing advertising rules, where cost is the concern.
  • Product strategy selection, where the focus is on speed, being lightweight, and easy deployment.
  • Companion services for core services, which should not add load to the original services.
  • Artifacts of automated production, such as interface services derived from configured APIs.

Combining SSR and workers

  • SSR needs a JS runtime, which is exactly what a worker provides
  • SSR builds on web standards
  • SSR logic is simple

Specific use cases are omitted here.

SSR is placed on the worker; the worker can also sit at the edge, on CDNs, and so on, which I am not familiar with.

Since SSR runs on the worker, it can be used in different scenarios:

  • NSR (native side render): rendering on the mobile side, predicting and pre-rendering the next page so it opens instantly on mobile devices
  • Streaming rendering, which improves TTFB by showing part of the content first (see the sketch after this list)
  • Edge deployment, which can optimize TTFB but introduces other problems
  • CSR fallback, i.e. falling back on errors
  • Caching can be configured, on a CDN for example
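For the streaming rendering point, here is a minimal sketch of my own (assuming a React 18 app on a worker-style runtime, not something shown in the talk) using `renderToReadableStream`, which returns a Web ReadableStream that can be passed straight into a Response; depending on the environment the import path may need to be `react-dom/server.browser`.

```ts
// Streaming SSR sketch for a worker-style runtime with React 18.
// "App" is a placeholder component, not from the talk.
import { createElement } from "react";
import { renderToReadableStream } from "react-dom/server";

function App() {
  return createElement("h1", null, "Streamed from the server");
}

export default {
  async fetch(_request: Request): Promise<Response> {
    // The shell is flushed as soon as it is ready; the rest of the HTML
    // streams afterwards, which improves TTFB for the user.
    const stream = await renderToReadableStream(createElement(App));
    return new Response(stream, {
      headers: { "content-type": "text/html; charset=utf-8" },
    });
  },
};
```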

QA section#

  • Combining SSR and auth: can it integrate with user permissions? The answer depends on your choices; be careful with caching.
  • An SSR app has both SSR and CSR parts: how do you tell which runtime a piece of logic runs in, for example code touching the window or location object? (see the sketch after this list)
    • The server side has no window: either introduce a polyfill so window access does not throw, or consider whether the logic belongs in CSR instead.
    • How do you tell whether a request is SSR or CSR? That may require marking it in the request trace.
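For the window/location question, the common pattern is a simple environment check; a minimal sketch:

```ts
// Branch SSR vs CSR logic by checking for browser globals instead of
// touching window directly on the server.
const isServer = typeof window === "undefined";

export function currentPath(fallback = "/"): string {
  // On the server there is no location object, so return a fallback
  // (or read the path from the incoming request instead).
  return isServer ? fallback : window.location.pathname;
}
```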

Extension#

Roll your own JavaScript runtime

2/3 User Experience Digitalization and Efficiency Improvement#

Speaker: Ren Longfei, Qunar Hotel Frontend Manager

Outline

  • User experience digitalization
  • Frontend digitalization efficiency improvement
  • Summary and outlook

User Experience Digitalization#

Business background and analysis.
Refining user operation scenarios, covering both user experience and development experience.

How do you measure user experience? With various indicators; a chart was shown, with the relevant parts colored.

image.png

Refinement direction.

image.png

From the technology perspective:

The previously fragmented tools and indicators are now being digitized as a whole.

Indicators include fluency (TTI/FCP/LCP), stability (crashes, exceptions, kill rate), energy consumption (energy, CPU, overheating), and so on.

On the capability side, things are abstracted into APM, reports, and so on.

Achieving overall digitalization comes down to defining indicators, monitoring them, and building platforms. The speaker showed a diagram with the details.

image.png

This part is unfamiliar to me. Specifically, to optimize the experience, first evaluate the indicators, align understanding across departments, and converge on feasible solutions. After one department has optimized part of the experience, the approach is abstracted and promoted so that the infrastructure team can turn it into a standard plan.

Then measure data across the entire lifecycle and do targeted optimization.

However, frontend optimization hits a bottleneck: it still comes down to the data-fetching stage. The way to optimize is still prediction and prefetching, so users reach the next page faster, trading space for time.

This becomes a pre-request design: different strategies compute which user behavior should trigger the pre-request.

Technical benefits: a 31% predictive-request hit rate, and TTI reduced by 60%+.

Trigger timings include first-screen triggers, browsing triggers, and navigation (jump) triggers; a sketch follows.
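As an illustration only (not Qunar's actual system), a browsing trigger and a jump trigger for pre-requests could look like this:

```ts
// Illustrative prefetch triggers: fire a pre-request when a link scrolls
// into view (browsing trigger) or when the user presses it (jump trigger).
const prefetched = new Set<string>();

function prefetch(url: string): void {
  if (prefetched.has(url)) return;
  prefetched.add(url);
  // Fire-and-forget; the response should land in an HTTP or app-level
  // cache so the next page can read it instantly.
  fetch(url).catch(() => prefetched.delete(url));
}

const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      prefetch((entry.target as HTMLAnchorElement).href);
    }
  }
});

document.querySelectorAll<HTMLAnchorElement>("a[data-prefetch]").forEach((a) => {
  observer.observe(a);
  // pointerdown fires well before the navigation completes.
  a.addEventListener("pointerdown", () => prefetch(a.href));
});
```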

With these computed metrics, monitoring can reveal trends, surface problems, break indicators down, and drive targeted optimization.

Competitive analysis is automated where possible: for example, capturing mobile screenshots, injecting code, and running OCR recognition, or other automated means, combined with the data.

image.png

This enables a data-driven comparison with peers in the industry.

Regularly compare with competitors. Define indicators, measure data, summarize analysis.

Frontend Digitalization Efficiency Improvement#

On the business side, which involves more online services, how can frontend efficiency be improved?

image.png

Low-code platforms are driven by configuration. BFF is also run as serverless online services.

On how the self-developed serverless platform was built, the slides do not show much; I can go back and listen again later if needed.

From local development to cloud development. The speaker also introduced the platform product and its system design diagram. There has always been traffic coming in from the to-C containers.

A single pod runs multiple functions, with security and business isolation taken into account.

Summary#

image.png

QA section

  • How are stability and circuit breaking handled for a single pod? For example, if a single function's pod is saturated, how is the number of replicas set? A very professional question.
  • The system is an internal platform and will not be open-sourced.
  • They use AWS resources with NestJS; there are bottlenecks between function calls. How is that handled?
    • All functions are callable; they can be invoked directly or new instances can be created.
    • AWS uses IAM roles to separate resources, so how are function resource permissions controlled? Currently they are open by default.
  • Custom function runtimes: only a Node runtime has been built so far.
  • Visual editing: has SDK orchestration been done, and how (editing entities and running them)? Can multiple service calls be orchestrated? The implementation difficulty is relatively high.

3/3 Some Opportunities Node.js V20 Brings in the AI Era#

Speaker: Uncle Wolf (狼叔)

  • Node version changes
  • Introduction to Node v20
  • New opportunities in the AI era
  • Looking at full-stack again

Node version changes#

2017 Node v8 - 2023 Node v21

v20 has many stable features.

Node v20#

Some background knowledge and hands-on practice.

The default is moving from CommonJS to ESM. It used to be confusing, but is now being unified; in v21 the default handling is simplified.

The community is actively pushing ESM adoption, with packages that support only ESM and not CJS, as exemplified by the ESM-only stance published by sindresorhus. This can be seen as an opportunity.

  • ESM service conversion:
    • For example, in traditional React projects, analyze the dependencies and point them at esm.sh.
    • When TS resources are encountered, they can be transpiled to ESM on the fly and run locally directly (see the sketch after this list).
    • By customizing the renderer and writing the tag content into the HTML, development can happen in the browser, with page rendering handled there.
    • For low-code scenarios, this can be implemented without a CLI.
    • Represented by Vite and the move to Rust; it may become possible to drop the CLI entirely, and the ESM-without-CLI direction may be a trend.
    • Node can run web containers.
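A rough sketch of the "transpile TS to ESM on the fly" idea, using esbuild's transform API; the HTTP wiring here is illustrative, not what the speaker showed:

```ts
// Serve .ts files as ESM on the fly so the browser can import them as
// modules. Illustrative only; paths and port are made up.
import { readFile } from "node:fs/promises";
import { createServer } from "node:http";
import { transform } from "esbuild";

createServer(async (req, res) => {
  if (req.url?.endsWith(".ts")) {
    const source = await readFile(`.${req.url}`, "utf8");
    // Strip the types and emit ESM for the browser to load directly.
    const { code } = await transform(source, { loader: "ts", format: "esm" });
    res.writeHead(200, { "content-type": "text/javascript" });
    res.end(code);
    return;
  }
  res.writeHead(404);
  res.end();
}).listen(3000);
```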

Looking forward to the future:

Built-in syntax support in Node. Node v20 can already import from remote URLs the way Deno does; it just has not become a stable default in the latest versions yet.

Asynchronous flow control: three approaches, error-first callbacks, Promises, and async/await. Generators, Bluebird, and other historical options are now rarely seen; the mainstream has settled on Promises and async/await.
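The same fs call in the three styles, for reference:

```ts
// Error-first callback, Promise chain, and async/await over the same call.
import fs from "node:fs";
import fsp from "node:fs/promises";

// 1. Error-first callback
fs.readFile("a.txt", "utf8", (err, data) => {
  if (err) return console.error(err);
  console.log(data.length);
});

// 2. Promise chain
fsp
  .readFile("a.txt", "utf8")
  .then((data) => console.log(data.length))
  .catch(console.error);

// 3. async/await, the mainstream choice today (top-level await needs ESM)
const data = await fsp.readFile("a.txt", "utf8");
console.log(data.length);
```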

In the testing direction there is xv: write a test with node's assert and run it with xv, very lightweight.

Node also has a built-in test framework, node:test; the separate frameworks and assertion libraries used before are no longer needed. Just run `node --test --test-reporter spec --watch ./*.test.js`: Node testing Node.
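A minimal node:test example of what this looks like with no third-party dependency:

```ts
// math.test.mjs - only built-in modules, no external framework needed.
import test from "node:test";
import assert from "node:assert/strict";

test("addition works", () => {
  assert.equal(1 + 1, 2);
});

test("async tests are supported too", async () => {
  const value = await Promise.resolve("node:test");
  assert.match(value, /node/);
});
```

Running `node --test` in the project directory picks files named `*.test.mjs` (and similar patterns) up automatically.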

TS as a first-class citizen: Deno and Bun both support it out of the box. Knowledge is advancing accordingly: type gymnastics, tsc/tsd/tsx/tsup/tsdoc and the like, IoC decorators, design patterns, OOP, and so on.

From Mongo to Prisma/Drizzle, types are being introduced. Proxy/Reflect; transaction handling is still troublesome, and out-of-the-box solutions are still scarce.

OOP in the future will not differ much from Java.

Writing a Node module today is not easy; with Deno or Bun it is simpler.

image.png
A comparison with Deno and Bun: advantages and disadvantages.

Node focuses more on compatibility and security rather than simply chasing speed; Node's performance is sufficient. See Matteo Collina's "Why is Bun faster than Node.js".

Lightweight runtimes: for example Noslate workers; WinterCG-style workers have good implementations, with scenarios such as FaaS + ER.

There are many solutions for thread workers: worker_threads, vm, vm2, and so on.
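A minimal worker_threads sketch (my own, assuming the file runs as ESM, e.g. compiled or via a TS-aware runner) that offloads a CPU-bound task:

```ts
// Offload a CPU-bound fib() to a thread so the event loop stays free.
import {
  Worker,
  isMainThread,
  parentPort,
  workerData,
} from "node:worker_threads";
import { fileURLToPath } from "node:url";

if (isMainThread) {
  // Re-run this same file inside a worker thread.
  const worker = new Worker(fileURLToPath(import.meta.url), { workerData: 40 });
  worker.on("message", (result) => console.log("fib(40) =", result));
  worker.on("error", console.error);
} else {
  const fib = (n: number): number => (n < 2 ? n : fib(n - 1) + fib(n - 2));
  parentPort?.postMessage(fib(workerData as number));
}
```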

Know what Node is good at and what its side effects are, and do less patching on top.

What was added between Node v8 and v20.

New Opportunities in the AI Era#

Integration and ecology.

From PRD, to UI recognition, to code, there is a lot of computation at every step. Quite painful.

Understand AI-assisted coding: it is more about expert + AI.

AI SDKs all support Node, so multiple applications can be put together faster. Applying Node + AI has clear benefits.

The following figure shows the AI technology solution

image.png

Quickly assemble and launch with node, node + AI has advantages.

Documate + Node generates AI-powered documentation, open-sourced by AirCode. Looking at its dependencies: the Orama vector database and a simple code implementation; Node is very suitable for implementing this kind of feature.

Integration: tRPC for typed calls between frontend and backend via @trpc/client. AirCode was mentioned.
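A sketch of what the typed end-to-end call looks like with a tRPC v10-style API (server and client would live in separate files; the names and URL here are illustrative):

```ts
// server.ts (sketch)
import { initTRPC } from "@trpc/server";
import { z } from "zod";

const t = initTRPC.create();
export const appRouter = t.router({
  greet: t.procedure.input(z.string()).query(({ input }) => `hello, ${input}`),
});
export type AppRouter = typeof appRouter;

// client.ts (sketch) - the router type flows through, so the call below is
// fully typed without any code generation.
import { createTRPCProxyClient, httpBatchLink } from "@trpc/client";

const client = createTRPCProxyClient<AppRouter>({
  links: [httpBatchLink({ url: "http://localhost:3000/trpc" })],
});
const greeting = await client.greet.query("node"); // inferred as string
console.log(greeting);
```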

httpc.dev is used for calls: similar to assembling parameters on an SDK and then invoking the corresponding function.

Node FaaS becomes lighter, hiding HTTP knowledge behind the SDK and lowering the barrier.

Looking at full-stack again#

Is frontend dead? There are phenomena like outsourcing and replacement by the server side.

Next.js now uses 'use server' directly, which feels like going back to the PHP era. There is no major innovation on the frontend; frontend and backend keep cycling toward full-stack.

He also talked about the changes in Rails and DHH.

Low code: the total amount of work stays the same, so the question is how to reduce workload and cut intermediate steps. He talked about retool.com: low code, SQL, automatically generated UI.

He is writing volume 4, about the next decade.

QA

  • CPU-intensive computing: a thread-worker architecture, splitting N-API calls out to Rust; splitting capabilities out is the most effective approach, for example handing work to Golang.
  • Could Node build TS in? Yes.
  • JS + TS: can optimization be based on types? Yes, the current ecosystem relies on TS type inference.

Outlook after listening to the three presentations#

The three speakers are indeed impressive, and there is a lot I do not understand. These are real, practical experiences; it is just that the information density is a bit high.

After listening to the first one, I finally have a better understanding of workers, and I now understand the limitations of products such as Workers, Puppeteer, and D1 on Cloudflare. It turns out the backend does not necessarily have to be Node.

The second one is more difficult: the level is high and there is a lot of detail. I have not practiced it myself, so I can follow it but cannot fully digest it.

For the third one, I looked at the changes in Node again; the density is quite high.
