9 of the best JavaScript frameworks

The best JavaScript frameworks make coding faster and easier, so you can focus on designing the perfect website layout – instead of becoming bogged down in code. A number of great ones have popped up on the market in recent years. 

In this article, we'll take a look at the biggest and best JavaScript frameworks around, and explore how to get the best out of them for your next projects. We'll look at Vue.js, React, AngularJS, Polymer and Aurelia, then we'll also explore some newer or lesser-known options you might want to consider.

Most of these frameworks are open source projects, too, so you can dig in and see how they work – or even contribute yourself.

Vue.js

Best for:

  • Beginners
  • Lightweight applications with a small footprint

Vue.js is a progressive JavaScript framework for building user interfaces. An open source project (the code is on GitHub), it's ideal for beginners. The main library is focused on the view layer and all templates are valid HTML, making it easy to pick up. In the following two mini-tutorials, we'll walk through how to use Vue to manage multiple data stores, and speed up the first load to improve your site's performance.

01. Manage state with Vue

As with any component-based library, managing state in Vue can be tricky. While the application is small, it’s possible to keep things in sync by emitting events when values change. However, this can become brittle and prone to errors as the application grows, so it may be better to start out with a more centralised solution.

If you're familiar with Flux and Redux, Vuex works in much the same way. State is held in one centralised location and is linked to the main Vue application. Everything that happens within the application is reflected somewhere within that state. Components can select what information is relevant to them and be notified if it changes, much as if it were part of their own internal state.

A Vuex store is made up of four things: the state, getters, mutations and actions. The state is a single object that holds all the necessary data for the entire application. The way this object gets structured depends on the project, but would typically hold at least one value for each view.

Getters work like computed properties do inside components. Their value is derived from the state and any parameters passed into it. They can be used to filter lists without having to duplicate that logic inside every component that uses that list.

The state cannot be edited directly. Any updates must be performed through mutation methods supplied inside the store. These are usually simple actions that perform one change at a time. Each mutation method receives the state as an argument, along with any payload it needs, and updates the state with the new values.

Mutations need to be synchronous in order for Vuex to understand what has changed. For asynchronous logic – like a server call – actions can be used instead. Actions can return Promises, which lets Vuex know that the result will change in the future as well as enabling developers to chain actions together.

To perform a mutation, it has to be committed to the store by calling commit() and passing the name of the required mutation method. Actions need to be dispatched in a similar way with dispatch().
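
To make this concrete, here is a minimal sketch of a store that wires the four pieces together – the state shape and the fetchTodos() helper are illustrative:

```js
import Vue from 'vue';
import Vuex from 'vuex';
import { fetchTodos } from './api'; // hypothetical server call

Vue.use(Vuex);

export default new Vuex.Store({
  // Single source of truth for the whole application
  state: {
    todos: []
  },
  // Derived values, similar to computed properties
  getters: {
    completedTodos: state => state.todos.filter(todo => todo.done)
  },
  // Synchronous updates – the only way the state may change
  mutations: {
    setTodos(state, todos) {
      state.todos = todos;
    }
  },
  // Asynchronous logic; commits a mutation once the server call resolves
  actions: {
    loadTodos({ commit }) {
      return fetchTodos().then(todos => commit('setTodos', todos));
    }
  }
});
```

A component would then call this.$store.dispatch('loadTodos') rather than touching the state directly.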

It’s good practice to have actions commit mutations rather than commit them manually. That way, all updating logic is held together in the same place. Components can then dispatch the actions directly, so long as they are mapped using the mapActions() method supplied by Vuex.

To avoid overcomplicating things, the store can also be broken up into individual modules that look after their own slice of the state. Each module can register its own state, getters, mutations and actions. State is combined between each module and grouped by their module name, in much the same way as combineReducers() works within Redux.
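
A sketch of the same idea split into modules – the cart and user slices here are purely illustrative:

```js
const cart = {
  state: { items: [] },
  mutations: {
    addItem(state, item) {
      state.items.push(item);
    }
  }
};

const user = {
  state: { name: '' }
};

const store = new Vuex.Store({
  modules: { cart, user }
});

// Module state is grouped under the module's name:
// store.state.cart.items and store.state.user.name
```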

02. Explore lazy load routes

By default, the entire contents of the application end up inside one JavaScript file, which can result in a slow page load. A lot of that content is never used on the first screen the user visits. Instead it can be split off from the main bundle and loaded in as and when needed.

Vue makes this process incredibly simple to set up, as vue-router has built-in support for lazy loading.

Vue supports using dynamic imports to define components. These return Promises, which resolve to the component itself. The router can then use that component to render the page as normal. This works alongside the code splitting built into webpack, which makes it possible to use features like magic comments to define how components should be split.
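
A minimal sketch of a lazily loaded route, assuming a webpack-based build and a hypothetical Profile.vue view:

```js
import Vue from 'vue';
import Router from 'vue-router';

Vue.use(Router);

// The dynamic import returns a Promise that resolves to the component.
// The magic comment tells webpack which chunk the code should be split into.
const Profile = () => import(/* webpackChunkName: "profile" */ './views/Profile.vue');

export default new Router({
  routes: [
    { path: '/profile', component: Profile }
  ]
});
```

The chunk is only fetched the first time a user navigates to /profile.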

React

Best for:

  • Sites and applications with complex view logic
  • Quick prototypes with a low barrier to entry

Launched in 2013, React is maintained by Facebook and Instagram, alongside a community of developers. It's component-based and declarative, and you can also use it to power mobile apps via React Native.

Here, we'll explain how to keep your code clean by separating your concerns, move contents outside of the root component, and ensure errors don't destabilise your application.

01. Use container and presentational components

As with any project, it's important to keep a separation of concerns. All React applications start off simple. As they grow, it can be tempting to keep adding logic to the same few components. In theory, this simplifies things by reducing the number of moving parts. When problems arise, however, these large components become prone to errors that are difficult to debug.

React and JSX encourage the creation of multiple small components to keep things as simple as possible. While breaking the interface down into smaller chunks can help with organisation, having a further separation between how a component works and what it looks like provides greater flexibility.

Container and presentational components are special names given to this separation. The container's job is to manage state and deal with interfacing with other parts of the application such as Redux, while the presentational component deals solely with providing the interface.

A container component will often be in charge of a small section of the UI, like a tweet. It will hold all the workings of that component – from storing state, like the number of likes, to the methods required for interaction, such as a mechanism for liking that tweet.

If the application makes use of external libraries, they should be hooked in at this point. For example, Redux's connect method would provide the container with a way of dispatching actions to the store without involving the presentational component.

Containers will never render their own UI and will instead render another component – the presentational component.

This component will be passed props that detail all the information needed to render the view. If it needs to provide interactivity, the container will then pass down methods for this as well, which can be called like any other method.
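
A minimal sketch of this split for the tweet example – the component and prop names are illustrative:

```jsx
import React from 'react';

// Presentational component: only describes what the UI looks like
const Tweet = ({ text, likes, onLike }) => (
  <div className="tweet">
    <p>{text}</p>
    <button onClick={onLike}>Like ({likes})</button>
  </div>
);

// Container component: holds the state and behaviour, then renders the presentational one
class TweetContainer extends React.Component {
  constructor(props) {
    super(props);
    this.state = { likes: 0 };
    this.handleLike = this.handleLike.bind(this);
  }

  handleLike() {
    this.setState(prev => ({ likes: prev.likes + 1 }));
  }

  render() {
    return <Tweet text={this.props.text} likes={this.state.likes} onLike={this.handleLike} />;
  }
}

export default TweetContainer;
```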

Having this separation encourages developers to keep things as simple as possible. If a container is starting to grow too large, this makes it easy to break it up into a smaller set of components.

If the inner workings of a component, such as its state, need to change, this technique allows the presentational component to remain unaffected. This also means this component can be used somewhere else in the application without needing to adjust how it functions. As long as it keeps being served the same data, it will continue to work.

02. Render with portals

React 16 introduced the ability to return lots of different types of data from a component. While previously it had to be either a single component or 'null', the latest version allows strings, numbers, arrays and a new concept called 'portals'.

The return value of a render() method decides what React displays, which is shown at that point in the component hierarchy. Portals allow React to render any of these return types outside of the component they were called from.

These can be other parts of the page completely separate from the main application. They still form part of React and work just the same as any component, but are able to reach outside of the normal confines of the root container.

A typical use case of this technique would be to trigger modal windows. To get the correct positioning, overlay and accessibility requirements out of a modal, it ideally needs to sit as a direct descendant of the <body>. The problem is, the root of a single-page application will likely take up that position. Components managing modals will either need to trigger something in the root component, or render it out of place.

To solve this, a Modal component can return a portal. The createPortal function that creates it takes two arguments – what needs to be rendered and where it should render it. The second parameter is a regular DOM node reference, rather than anything specific to React.

In this example, it references a <div> at the top of the DOM tree that is a sibling of the main app container. It is possible to target any node, visible or not, as with any JavaScript. To use it, another component can summon Modal just like any other component. It will then display its contents in the targeted node.
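
A minimal sketch of such a Modal component, assuming the page contains a <div id="modal-root"> element alongside the app's root container:

```jsx
import React from 'react';
import ReactDOM from 'react-dom';

// A sibling of the main app container, e.g. <div id="modal-root"></div>
const modalRoot = document.getElementById('modal-root');

class Modal extends React.Component {
  render() {
    // First argument: what to render; second: the DOM node to render it into
    return ReactDOM.createPortal(this.props.children, modalRoot);
  }
}

export default Modal;

// Usage inside any other component:
// {this.state.showModal && <Modal><p>Hello from the portal</p></Modal>}
```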

Because React events are synthetic, they are capable of bubbling up from the portal contents to the containing component, rather than the DOM node they are rendered in. In the modal example, this means that the summoning component can also handle its state, such as its visibility or contents.

03. Establish error boundaries

Unhandled errors can cause havoc in a JavaScript application. Without catching them as they happen, methods can stop executing halfway through. This can cause unpredictable behaviour if the user carries on regardless, and it makes for a bad experience all round.

Previous versions of React did not cope with these situations well. If an error occurred in a nested component, it would leave its parents in limbo. The component state object would be stuck in the middle of performing an operation that could end up locking up the interface.

As of version 16, the way React handles errors has changed. Now an error inside any component will unmount the entire application. While that stops issues arising from an unstable state, it doesn't lend itself to a good user experience.

To avoid this, we can create a special component called an error boundary to ring-fence parts of the application from the rest. Any errors that happen inside children of the boundary will not cause issues to those outside of it.

Error boundaries work a lot like typical catch blocks in JavaScript. When an error occurs somewhere inside the component tree, it will be caught by the componentDidCatch() method, which receives the error thrown along with a stack trace. When that gets called it is an opportunity to replace the tree with a fresh interface – typically an error message. 
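
A minimal sketch of an error boundary built around componentDidCatch() – the fallback message is illustrative:

```jsx
import React from 'react';

class ErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
  }

  componentDidCatch(error, info) {
    // Receives the thrown error and a stack trace of the component tree
    console.error(error, info.componentStack);
    this.setState({ hasError: true });
  }

  render() {
    if (this.state.hasError) {
      // Swap the broken tree for a fresh interface
      return <p>Something went wrong.</p>;
    }
    // Otherwise render whatever was wrapped
    return this.props.children;
  }
}

export default ErrorBoundary;

// Usage: <ErrorBoundary><ProfilePage /></ErrorBoundary>
```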

Since it only renders its children, this component can wrap others to catch any errors that happen within them. The components chosen for this will vary by application, but error boundaries can be placed wherever they are needed, including inside other boundaries.

Error boundary components shouldn't be too complicated. If an error occurs inside of a boundary, it will bubble up to the next boundary up. Failing that, it will unmount the whole application as usual.

AngularJS

Best for:

  • Large projects in need of structure
  • Applications with lots of changing data

AngularJS is an open source frontend web application framework developed by Google. It offers declarative templates with data-binding, MVW, MVVM, MVC, and dependency injection, all implemented using pure client-side JavaScript. 

Here, we'll show you how to use AngularJS to create reusable code blocks known as custom decorators, serve content to your users quicker, and create performant and easy to control animations with ease.

01. Create custom decorators

TypeScript is a superset that sits on top of JavaScript. It supplies features such as static typing, classes and interfaces that are lacking in the native language. This means that when creating large applications developers can get feedback on how best to work with external code and avoid unnecessary bugs.

Angular is built exclusively on top of TypeScript, so it is important to understand how to utilise it correctly. Combining the strengths of both provides a solid foundation for the application as it grows. There are not many better techniques to demonstrate this than with decorators.

Decorators are special functions designed to supply behaviour to whatever they are applied to. Angular makes extensive use of them to provide hints to the compiler, like with @Component on classes or @Input on properties.

The aim is to make these functions as reusable as possible; they are often used to provide utilities such as logging. In the example below, @ClassLogger is supplied to a component to log to the console when certain lifecycle hooks are fired. This could be applied to any component to track its behaviour.

The ClassLogger example returns a function, which enables us to customise the behaviour of the decorator as it is created. This is known as the decorator factory pattern, which is used by Angular to create its own decorators.
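
A minimal sketch of what such a decorator factory might look like – ClassLogger and its logging behaviour are illustrative, not part of Angular:

```typescript
import { Component } from '@angular/core';

// Decorator factory: calling ClassLogger(...) returns the decorator itself
function ClassLogger(prefix: string = '') {
  return function (constructor: Function) {
    const original = constructor.prototype.ngOnInit;

    // Wrap the lifecycle hook so every call is logged to the console
    constructor.prototype.ngOnInit = function (...args: any[]) {
      console.log(`${prefix} ${constructor.name}: ngOnInit fired`);
      if (original) {
        original.apply(this, args);
      }
    };
  };
}

// Decorators stack, so ClassLogger sits alongside Angular's own @Component
@ClassLogger('[debug]')
@Component({
  selector: 'app-example',
  template: '<p>Example works</p>'
})
export class ExampleComponent {
  ngOnInit() {
    // The component's own initialisation logic runs after the log statement
  }
}
```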

To apply a decorator, it needs to be positioned just before what it is decorating. Because of the way they are designed, decorators can be stacked on top of each other, including Angular's own. TypeScript will chain these decorators together and combine their behaviours.

Decorators are not just limited to classes. They can be applied to properties, methods and parameters inside of them as well. All of these follow similar patterns, but are slightly different in their implementations.

A plain method decorator takes three arguments – the object targeted, the name of the method and the descriptor that provides details on its implementation. By hooking into the value of that descriptor, we can replace the behaviour of the method based on the needs of the decorator, as in the sketch below.
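
A sketch of a method decorator along those lines – LogCalls and the service it decorates are illustrative:

```typescript
// Replaces a method's implementation so that every call is logged first
function LogCalls(target: any, name: string, descriptor: PropertyDescriptor) {
  const original = descriptor.value;

  descriptor.value = function (...args: any[]) {
    console.log(`${name} called with`, args);
    return original.apply(this, args);
  };

  return descriptor;
}

class SearchService {
  @LogCalls
  search(term: string) {
    return `results for ${term}`;
  }
}
```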

02. Build platform-level animations

Animations are a great way to introduce a friendly side to an interface. But trying to control animations in JavaScript can be problematic. Adjusting dimensions like height is bad for performance, while toggling classes can quickly get confusing. The Web Animations API is a good approach, but working with it inside Angular can be tricky.

Angular provides a module that enables components to be animated by integrating with the properties already within the class. It uses a syntax similar to CSS-based animations, which gets passed in as component metadata.

Each animation is defined by a 'trigger' – a grouping of states and transition effects. Each state is a string value that, when matched, applies the associated styles to the element. The transition values define different ways the element should move between those states. In the example below, once the value bound to hidden evaluates to true, the element will shrink out of view.

Two other special states are also defined: void and *. The void state relates to a component that was not in the view at the time and can be used to animate it in or out. The wildcard * will match with any state and could be used to provide a dimming effect while any transition occurs.
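
A sketch of the trigger described above, assuming BrowserAnimationsModule has been imported into the application – the state names and timings are illustrative:

```typescript
import { Component } from '@angular/core';
import { trigger, state, style, transition, animate } from '@angular/animations';

@Component({
  selector: 'app-panel',
  template: `<div [@shrink]="hidden ? 'hidden' : 'visible'">Panel content</div>`,
  animations: [
    trigger('shrink', [
      // Styles applied while the element sits in each named state
      state('visible', style({ transform: 'scale(1)', opacity: 1 })),
      state('hidden', style({ transform: 'scale(0)', opacity: 0 })),
      // void covers elements that are entering or leaving the view
      transition('void => *', [style({ opacity: 0 }), animate('200ms')]),
      // The wildcard matches any state, so this acts as a catch-all transition
      transition('* => *', animate('300ms ease-in'))
    ])
  ]
})
export class PanelComponent {
  hidden = false;
}
```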

Inside the template, the trigger is bound to a value within the component that represents the state. As that value changes, so does the state of the animation.

That bound value can be supplied either as a plain property or as the output of a method, but the result needs to evaluate into a string that can be matched against an animation state.

These animations also provide callbacks, such as when they start or stop. This can be useful for removing components that are no longer visible.

03. Serve content quicker with server rendering

HTML parsers struggle with JavaScript frameworks. Web crawlers are often not sophisticated enough to understand how Angular works, so they only see a single, blank element and not the whole application.

Rendering the application on the server sends down an initial view for users to look at while Angular and the rest of the functionality downloads in the background. Once the application arrives, it silently picks up from where the server left off.

The tools needed to achieve this in Angular are a native part of the platform as of version 4. With a bit of setup, any application can be server rendered with just a few tweaks.

Both server and browser builds need their own modules, but share a lot of common logic. Both need a special version of BrowserModule, which allows Angular to replace the contents on-screen when it loads in. The server also needs ServerModule to generate the appropriate HTML. 
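
A sketch of what the server-side module might look like – the file names are illustrative, and the browser-side AppModule is assumed to import BrowserModule.withServerTransition({ appId: 'my-app' }):

```typescript
import { NgModule } from '@angular/core';
import { ServerModule } from '@angular/platform-server';

import { AppModule } from './app.module';
import { AppComponent } from './app.component';

@NgModule({
  imports: [
    AppModule,    // shares the ordinary application logic
    ServerModule  // adds the providers needed to render HTML on the server
  ],
  bootstrap: [AppComponent]
})
export class AppServerModule {}
```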

Servers also need their own entry points where they can bootstrap their unique behaviours as necessary. That behaviour depends on the app, but will also likely mirror much of the main browser entry point.

If using the CLI, that also needs to be aware of how to build the project for the server by pointing to the new entry point. This can be triggered by using the "--app" flag when building for the server.

The application is now ready to be server rendered. Implementations will vary based on the server technology used, but the base principles remain the same. For example, Angular provides an Express engine for Node, which can be used to populate the index page based on the request sent; all the server needs to do is serve that file. Server rendering is a complex subject with many edge cases, and the Angular Universal documentation covers these in more detail.

Polymer

Best for:

  • Combining with other platforms and frameworks
  • Working with JavaScript standards

Polymer is a lightweight library designed to help you take full advantage of Web Components. Read on to find out how to use it to create pain-free forms, bundle your components to keep requests low and sizes small, and finally how to upgrade to the latest Polymer release: 3.0. 

01. Work with forms

Custom elements are part of the browser. Once they are set up they work like any native element would do on the page. Most of the time, Polymer is just bridging the gap between now and what custom elements will be capable of in the future, along with bringing features like data binding.

One place where custom elements shine is their use as form inputs. Native input types in browsers are limited at best, but provide a reliable way of sending data. In cases where a suitable input isn't available – an autocomplete field, for example – custom elements can provide a suitable drop-in solution.

As their work is performed within the shadow DOM, however, custom input values will not get submitted alongside regular form elements like usual. Browsers will just skip over them without looking at their contents.

One way around this is to use an <iron-form> component, which is provided by the Polymer team. This component wraps around an existing form and will find any values either as a native input or custom element. Provided a component exposes a form value somewhere within the element, it will be detected and sent like usual.

In cases where a custom element does not expose an input, it's still possible to use that element within a form, provided it exposes a property that can be bound to.

If <my-input> exposes a property like "value" to hook into, we can pull that value out as part of a two-way binding. The value can then be read out into a separate hidden input as part of the main form. It can be transformed at this point into a string to make it suitable for form transmission. For forms not managed by a Polymer component that need to make use of these bindings, the Polymer team also provides a <dom-bind> component to bind these values automatically.

02. Bundle components

One of Polymer's biggest advantages is that components can be imported and used without any need for a build process. As optimised as these imports may be, each component requires a fresh request, which slows things down. While HTTP/2 speeds things up in newer browsers, those that do not support it will have a severely degraded experience. For those users, files should be bundled together.

If a project is set up using the Polymer CLI, bundling is already built in to the project. By running polymer build, the tool will collect all components throughout the project and inline any subcomponents they use. 

This cuts down on requests, removes unnecessary comments and minifies to reduce the file size. It also has the added benefit of creating separate bundles for both ES5 and ES2015 to support all browsers.

Outside of Polymer CLI, applications can still be bundled using the separate Polymer Bundler library. This works much like the CLI, but is more of a manual process. By supplying a component, it will sift through the imports of the file, inline their contents and output a bundled file.

Polymer Bundler has a few separate options to customise the output. For example, developers can choose to keep comments or only inline specific components. 

03. Upgrade to Polymer 3.0

The philosophy behind Polymer is to 'use the platform': instead of fighting against browser features, work with them to make the experience better for everyone. HTML imports are a key part of Polymer 2, but are being removed from the web components specification moving forward.

Polymer 3.0 changes the way that components are written to work with more established standards. While no breaking changes are made with the framework itself, it's important to know how the syntax changes in this new version.

The first thing to note is that Polymer is migrating away from Bower as a package manager. To keep up with the way developers work, npm will become the home of Polymer, as well as any related components, in the future.

To avoid using HTML imports, components are imported as JavaScript modules using the existing standardised syntax.

The major difference inside a component is that the class is now exported directly. This enables the module import <script> tag to work correctly. Any other components can be included by using ES2015 import statements within this file.

Finally, templates have been moved into the class and work with template literals. A project by the Polymer team called lit-html is working to provide the same flexibility as <template> tags along with the efficiency of selective DOM manipulation.
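
Putting those pieces together, a minimal Polymer 3.0 element written as an ES module might look like this – the element name and property are illustrative:

```js
import { PolymerElement, html } from '@polymer/polymer/polymer-element.js';

class GreetingCard extends PolymerElement {
  // The template now lives inside the class as a tagged template literal
  static get template() {
    return html`
      <style>
        p { font-weight: bold; }
      </style>
      <p>Hello, [[name]]!</p>
    `;
  }

  static get properties() {
    return {
      name: { type: String, value: 'world' }
    };
  }
}

// The class is exported and registered like any other custom element
customElements.define('greeting-card', GreetingCard);
export { GreetingCard };
```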

Aurelia

Best for:

  • Simple applications with little setup
  • Developing alongside web standards

Aurelia is a JavaScript client framework for web, mobile and desktop. It's written with next-gen ECMAScript, integrates with Web Components and has no external dependencies. 

Read on for two mini-tutorials, showing you how to change how properties display value and function, and how to use Aurelia to check values in forms. 

01. Use value converters

Sometimes, when developing components, the values being stored do not lend themselves well to being displayed in a view. A Date object, for example, has an unhelpful value when converted to a string, which requires developers to make special conversion methods just to show values correctly.

To get around this problem, Aurelia provides a mechanism to use classes to change values, known as value converters. These can take any kind of value, apply some kind of processing to it, and output that changed value in place of the original. 

They work similarly to pipes in Angular or filters in template languages like Twig.

Most will be one way – from the model to the view. But they can also work the other way. The same logic applies, but by using fromView instead of toView, values can be adjusted before they are returned back to the model. 

A good use case for this would be to format user input directly from the binding on the element. In the sketch below, the converter capitalises every word that is entered, which may be useful for a naming field.
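
A sketch of such a converter – the class name and capitalisation logic are illustrative:

```js
// Registered by convention: CapitaliseValueConverter is used as 'capitalise' in templates
export class CapitaliseValueConverter {
  // Model -> view: pass the stored value straight through
  toView(value) {
    return value;
  }

  // View -> model: capitalise each word before it reaches the model
  fromView(value) {
    return value
      .split(' ')
      .map(word => word.charAt(0).toUpperCase() + word.slice(1))
      .join(' ');
  }
}
```

In the view it is applied directly to the binding, for example `<input value.bind="name | capitalise">`.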

They can also be chained together, which encourages the creation of composable converters that can have different uses across the application. One converter could filter an array of values, which then passes to another that sorts them.

Converters can also be given simple arguments that can alter the way they behave. Instead of creating different converters to perform similar filtering, create one that takes the type of filter to be performed as an argument. While only one argument is allowed, they can be chained together to achieve the same effect.

02. Try framework-level form validation

Validation is an important part of any application. Users need to be putting the correct information into forms for everything to work correctly. If they do not, they should be warned of the fact as early as possible.

While validation can often be a tricky process, Aurelia has support for validating properties built right into the framework. As long as form values are bound to class properties, Aurelia can check that they are correct whenever it makes sense to the application.

Aurelia provides a ValidationController, which takes instructions from the class, looks over the associated properties and supplies the template with any checks that have failed.

Each controller requires a single ValidationRules class that defines what's to be checked. These are all chained together, which enables the controller to logically flow through the checks dependent on the options that are passed.

Each ruleset begins with a call to ensure(), which takes the name of the property being checked. Any commands that follow will apply to that property.

Next are the rules. There are plenty of built-in options like required() or email() that cover common scenarios. Anything else can use satisfies(), which takes a function that returns either a Boolean or a Promise that passes or fails the check.

After the rules come any customisations of that check, for example the error message to display. Rules provide default messages, but these can be overridden if necessary. 

Finally, calling on() applies the ruleset to the class specified. If it is being defined from within the constructor of the class, it can be called with this instead.
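
A sketch of a view-model that wires these pieces together, assuming the aurelia-validation plugin is registered – the property names and rules are illustrative:

```js
import { ValidationControllerFactory, ValidationRules } from 'aurelia-validation';

export class RegistrationForm {
  static inject = [ValidationControllerFactory];

  constructor(controllerFactory) {
    this.email = '';
    this.age = null;

    // The controller watches the bound properties and reports failed checks to the template
    this.controller = controllerFactory.createForCurrentScope();

    ValidationRules
      .ensure('email')
        .required()
        .email().withMessage('Please supply a valid email address.')
      .ensure('age')
        .satisfies(value => value === null || Number(value) >= 18)
      .on(this);
  }
}
```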

By default, validation will be fired whenever a bound property's input element is blurred. This can be changed to happen either when the property changes, or it can be triggered manually.

We've covered five of the biggest JavaScript frameworks, but it's important to note that they are by no means the be-all and end-all. New frameworks are always being built, and there are some that are still used heavily in the industry but don't garner as much press as the biggest players.

06. Ember.js 

Strengths

  • Handlebars for templates
  • Battle tested in production

Weaknesses

  • Opinionated
  • Sharp learning curve

Ember.js has been around since 2011, when it grew out of SproutCore – the framework behind Apple's iCloud web apps – and it is used in production at LinkedIn. It has been tried and tested in enormous applications used by millions.

The Ember Data module is praised for its power and sophistication: modelled after Rails' Active Record, it offers a simple interface for working with persisted data in frontend applications.

Some developers will find Ember's strong opinions too restrictive. As with Angular, Ember presents a fairly steep learning curve that can serve as a deterrent to adoption. But it is still a full featured and modern JavaScript framework that has already held up under the weight of enormous consumer-facing applications.

07. Preact

Strengths

  • Tiny (like itty bitty)
  • Can use class instead of className

Weaknesses

  • Same as React, but not React

Preact is the minimalist's version of React. Weighing in at only 3KB, it has the same API and the same strengths that React's focus on components brings to UI composition. While Preact sports the same virtual DOM as React, its 'diffing' algorithm is different and Preact claims it is "one of the fastest Virtual DOM libraries out there".

Preact is not the same as React. It does not enjoy the same adoption that React does and, with its nearly carbon-copy API, it becomes less clear why developers might choose a smaller project like Preact over React. Still, Preact's focus on performance and its minuscule size make it a compelling alternative for developers looking to eke out every last drop of performance.

08. Cycle.js 

Strengths

  • Precise state management
  • Functional programming paradigm

Weaknesses

  • It's very new

Momentum in the explosion of JavaScript frameworks has slowed, but Cycle.js is one of the newer players to emerge. Cycle is built on the concept of pure functional programming and streams. It then uses something called 'drivers' to handle 'effects' that occur in the code. These effects are things like changes to the HTML.

Cycle is so new that it's hard to gauge its strengths or weaknesses. It also uses a lot of terminology that's difficult to understand. While Cycle says it is easy to learn, it is also difficult to understand what it is actually doing. Cycle is new, at the cutting edge in its implementation and design. Developers looking to the bleeding edge of JavaScript should keep their eye on this one.

09. jQuery

Strengths

  • Simple
  • No build step

Weaknesses

  • No 'binding'
  • Not a true framework

Yes, you read that right: jQuery. The library that arguably vaulted JavaScript into the stratosphere is still around. jQuery is still the easiest way to directly manipulate an HTML page, perform common tasks like Ajax calls and work reliably with collections. Many a developer still pines for the days when a single jQuery include on a page was all you needed to build an application.

It isn't a full framework. There is no 'binding' between the HTML and the JavaScript, so state changes are managed by the developer. But developers should consider jQuery, especially if building applications that are small or inside of runtimes like Chrome Extensions, which often preclude the use of libraries like Vue, Angular or React because of their content security policy.
