JSPM

  • Created
  • Published
  • Downloads 150
  • License Proprietary

Component which enables ZCV

Package Exports

  • @zappar/zappar-react-three-fiber

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (@zappar/zappar-react-three-fiber) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.

Readme

Zappar for React Three Fiber

This library allows you to use Zappar's best-in-class AR technology with content built using the 3D rendering platform React Three Fiber.

You may also be interested in:

  • Zappar for A-Frame (@zappar/zappar-aframe)
  • Zappar's library for Unity
  • Zappar for JavaScript (@zappar/zappar), if you'd like to build content with a different 3D rendering platform
  • ZapWorks Studio, a full 3D development environment built for AR, VR and MR
  • Zappar for ThreeJS (@zappar/zappar-threejs)

Getting Started

Bootstrap Projects

You can get started super-quickly using one of our bootstrap projects. They contain the basics of an AR experience for the different tracking types - no more, no less.

Check out these JavaScript repositories, with webpack setups to get you started:
https://github.com/zappar-xr/zappar-react-three-fiber-image-tracking-webpack-bootstrap
https://github.com/zappar-xr/zappar-react-three-fiber-face-tracking-webpack-bootstrap
https://github.com/zappar-xr/zappar-react-three-fiber-instant-tracking-webpack-bootstrap

Or these TypeScript versions, with webpack setups optimized for development and deployment:
https://github.com/zappar-xr/zappar-react-three-fiber-image-tracking-webpack-bootstrap-typescript
https://github.com/zappar-xr/zappar-react-three-fiber-face-tracking-webpack-bootstrap-typescript
https://github.com/zappar-xr/zappar-react-three-fiber-instant-tracking-webpack-bootstrap-typescript

Example Projects

There's a repository of example projects for your delectation over here:
https://github.com/zappar-xr/zappar-react-three-fiber-examples

Starting Development

You can use this library by downloading a standalone zip containing the necessary files, by linking to our CDN, or by installing from NPM for use in a webpack project.

Standalone Download

Download the bundle here:
https://libs.zappar.com/zappar-react-three-fiber/0.0.11/zappar-react-three-fiber.zip

Unzip into your web project and reference from your HTML like this:

<script src="zappar-react-three-fiber.js"></script>

CDN

Reference the zappar.js library from your HTML like this:

<script src="https://libs.zappar.com/zappar-react-three-fiber/0.0.11/zappar-react-three-fiber.js"></script>

NPM Webpack Module

Run the following NPM command inside your project directory:

$ npm install --save @zappar/zappar-react-three-fiber

Then import the library into your JavaScript or TypeScript files:

import { ZapparCamera,  /* ... */ } from "@zappar/zappar-react-three-fiber";

The final step is to add the following entry to your webpack rules:

module.exports = {
  //...
  module: {
    rules: [
      //...
      {
        test: /zcv\.wasm$/,
        type: "javascript/auto",
        loader: "file-loader"
      }
      //...
    ]
  }
};

Overview

You can integrate the Zappar library with an existing React Three Fiber app. A typical project may look like this:

import { render } from 'react-dom';
import React, { useRef } from 'react';
import { ZapparCamera, ImageTracker, ZapparCanvas } from '@zappar/zappar-react-three-fiber';

export default function App() {
  // Set up a camera ref, as we need to pass it to the tracker.
  const camera = useRef();
  // Use webpack's file-loader to load in the target file
  const targetFile = require('file-loader!./example-tracking-image.zpt').default;
  return (
    <ZapparCanvas>
      {/* Set up the Zappar camera, setting the camera object's ref */}
      <ZapparCamera ref={camera} />
      {/* Set up the image tracker, passing our target file and camera ref */}
      <ImageTracker targetImage={targetFile} camera={camera}>
        {/* Create a normal pink sphere to be tracked to the target */}
        <mesh>
          <sphereBufferGeometry />
          <meshStandardMaterial color="hotpink" />
        </mesh>
      </ImageTracker>
      {/* Normal directional light */}
      <directionalLight position={[2.5, 8, 5]} intensity={1.5} />
    </ZapparCanvas>
  );
}
render(<App />, document.getElementById('root'));

The remainder of this document goes into more detail about each of the component elements of the example above.

Local Preview and Testing

For testing, you'll want to launch the project locally, without hosting it.

Due to browser restrictions surrounding use of the camera, you must use HTTPS to view or preview your site, even if doing so locally from your computer. If you're using webpack, consider using webpack-dev-server which has an https option to enable this.
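As a sketch of the webpack-dev-server approach, the relevant option in webpack-dev-server 4+ is server: "https" (older versions used https: true instead); the entry point and port here are assumptions:

```javascript
// webpack.config.js - development-only sketch; entry and port are assumptions
module.exports = {
  entry: "./src/index.js",
  devServer: {
    // Serve over HTTPS so the browser allows camera access.
    // webpack-dev-server v4+; on older versions use `https: true` instead.
    server: "https",
    port: 8080,
  },
};
```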

Alternatively you can use the ZapWorks command-line tool to serve a folder over HTTPS for access on your local computer, like this:

$ zapworks serve .

The command also lets you serve the folder for access by other devices on your local network, like this:

$ zapworks serve . --lan

Publishing and Hosting Content

Once you've built your site, you have a number of options for hosting it, including hosting with ZapWorks and self-hosting. Head over to the ZapWorks Publishing and Hosting article to learn more about these options.

Licensing

You need to maintain an active ZapWorks subscription in order to use this library. To learn more about licensing, see the ZapWorks site.

Setting up the Canvas

The first step when developing a React Three Fiber Universal AR project is to replace any existing Canvas in your scene with the ZapparCanvas component.

import { ZapparCanvas } from "@zappar/zappar-react-three-fiber";
// ...
return (
  <ZapparCanvas>
    {/** YOUR CONTENT HERE **/}
  </ZapparCanvas>
)

You may alternatively use the default react-three-fiber Canvas component, with colorManagement toggled off.

import { Canvas } from "react-three-fiber";
// ...
return (
  <Canvas colorManagement={false}>
    {/** YOUR CONTENT HERE **/}
  </Canvas>
)

Setting up the Camera

Add the ZapparCamera component to your scene (replacing any existing camera), and set its ref like this:

import { ZapparCamera } from "@zappar/zappar-react-three-fiber";
// ...
const camera = useRef();

return (
  <ZapparCanvas>
    <ZapparCamera ref={camera}/>
  </ZapparCanvas>
)

You don't need to change the position or rotation of the camera yourself - the Zappar library will do this for you, automatically.

User Facing Camera

Some experiences, e.g. face tracked experiences, require the use of the user-facing camera on the device. To activate the user-facing camera, provide the userFacing prop to the ZapparCamera component:

<ZapparCamera userFacing ref={camera} />

Mirroring the Camera

Users expect user-facing cameras to be shown mirrored, so by default the ZapparCamera will mirror the camera view for the user-facing camera.

Configure this behavior with the following option:

<ZapparCamera ref={camera} userCameraMirrorMode="poses" />

The values you can pass to userCameraMirrorMode are:

  • poses: this option mirrors the camera view and makes sure your content aligns correctly with what you're tracking on screen. Your content itself is not mirrored - so text, for example, is readable. This option is the default.
  • css: this option mirrors the entire canvas. With this mode selected, both the camera and your content appear mirrored.
  • none: no mirroring of content or camera view is performed.

There's also a rearCameraMirrorMode prop that takes the same values should you want to mirror the rear-facing camera. The default rearCameraMirrorMode is none.
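For example, to mirror the rear-facing camera view as well (a sketch; most experiences will leave this at its default of none):

```javascript
<ZapparCamera ref={camera} rearCameraMirrorMode="css" />
```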

Camera Pose

The Zappar library provides multiple modes for the camera to move around in the scene. You can set this mode with the poseMode prop of the ZapparCamera component. There are the following options:

  • default: in this mode the camera stays at the origin of the scene, pointing down the negative Z axis. Any tracked groups will move around in your scene as the user moves the physical camera and real-world tracked objects.
  • attitude: the camera stays at the origin of the scene, but rotates as the user rotates the physical device. When the Zappar library initializes, the negative Z axis of world space points forward in front of the user.
  • anchor-origin: the origin of the scene is the center of the group specified by the camera's poseAnchorOrigin prop. In this case the camera moves and rotates in world space around the group at the origin.

The correct choice of camera pose will depend on your given use case and content. Here are some examples you might like to consider when choosing which is best for you:

  • To have a light that always shines down from above the user, regardless of the angle of the device or anchors, use attitude and place a light shining down the negative Y axis in world space.
  • In an application with a physics simulation of stacked blocks, and with gravity pointing down the negative Y axis of world space, using anchor-origin would allow the blocks to rest on a tracked image regardless of how the image is held by the user, while using attitude would allow the user to tip the blocks off the image by tilting it.
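As a sketch of the anchor-origin mode, pass the tracker group's ref as the poseAnchorOrigin prop; the exact ref wiring shown here is an assumption based on the props described above:

```javascript
const camera = useRef();
const trackerGroup = useRef();
return (
  <ZapparCanvas>
    {/* The scene origin becomes the center of the group passed as poseAnchorOrigin */}
    <ZapparCamera ref={camera} poseMode="anchor-origin" poseAnchorOrigin={trackerGroup} />
    <ImageTracker ref={trackerGroup} targetImage={targetFile} camera={camera}>
      {/* CONTENT HERE */}
    </ImageTracker>
  </ZapparCanvas>
);
```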

Changing Default Camera

When mounted, the ZapparCamera component sets itself as the default scene camera. You can toggle this with the makeDefault prop:

<ZapparCamera makeDefault={false} />

Tracking

The Zappar library offers three types of tracking for you to use to build augmented reality experiences:

  • Image Tracking can detect and track a flat image in 3D space. This is great for building content that's augmented onto business cards, posters, magazine pages, etc.
  • Face Tracking detects and tracks the user's face. You can attach 3D objects to the face itself, or render a 3D mesh that's fit to (and deforms with) the face as the user moves and changes their expression. You could build face-filter experiences to allow users to try on different virtual sunglasses, for example, or to simulate face paint.
  • Instant World Tracking lets you track 3D content to a point chosen by the user in the room or immediate environment around them. With this tracking type you could build a 3D model viewer that lets users walk around to view the model from different angles, or an experience that places an animated character in their room.

Importing trackers from the package:

import { InstantTracker, ImageTracker, FaceTracker } from "@zappar/zappar-react-three-fiber";

Image Tracking

To track content from a flat image in the camera view, use the ImageTracker component:

<ImageTracker targetImage={targetFile} camera={camera}>
    {/*PLACE CONTENT TO APPEAR ON THE IMAGE HERE*/}
</ImageTracker>

The group provides a coordinate system that has its origin at the center of the image, with positive X axis to the right, the positive Y axis towards the top and the positive Z axis coming up out of the plane of the image. The scale of the coordinate system is such that a Y value of +1 corresponds to the top of the image, and a Y value of -1 corresponds to the bottom of the image. The X axis positions of the left and right edges of the target image therefore depend on the aspect ratio of the image.
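To make that scaling concrete: since Y spans -1 to +1 over the image's height, the X positions of the left and right edges are simply ± the image's aspect ratio. A hypothetical helper (not part of the library) illustrates this:

```javascript
// Hypothetical helper (not part of the library): given the target image's
// pixel dimensions, return the X coordinates of its left/right edges in the
// tracker's coordinate system, where Y spans -1 (bottom) to +1 (top).
function imageEdgesX(widthPx, heightPx) {
  const aspect = widthPx / heightPx;
  return { left: -aspect, right: aspect };
}

// A 1600x800 target is twice as wide as it is tall, so its edges sit at X = ±2.
const edges = imageEdgesX(1600, 800);
```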

Target File

ImageTrackers use a special 'target file' that's been generated from the source image you'd like to track. You can generate them using the ZapWorks command-line utility like this:

$ zapworks train myImage.png

The resulting file can then be passed as a targetFile prop to be loaded:

export default function App() {
  const camera = useRef();
  const targetFile = require('file-loader!./target.zpt').default;
  return (
    <ZapparCanvas>
      <ZapparCamera ref={camera} userCameraMirrorMode="css" />
      <ImageTracker targetImage={targetFile} camera={camera}>
          {/*PLACE CONTENT TO APPEAR ON THE IMAGE HERE*/}
      </ImageTracker>
    </ZapparCanvas>
  );
}

Events

The ImageTracker component will emit the following events on the element it's attached to:

  • onVisible - emitted when the image appears in the camera view
  • onNotVisible - emitted when the image is no longer visible in the camera view
  • onNewAnchor - emitted when a new anchor appears in the camera view for the first time.

Here's an example of using these events:

<ZapparCanvas>
  <ZapparCamera ref={camera} />
  <ImageTracker
    onVisible={(anchor) => console.log(`Visible ${anchor.id}`)}
    onNotVisible={(anchor) => console.log(`Not visible ${anchor.id}`)}
    onNewAnchor={(anchor) => console.log(`New anchor ${anchor.id}`)}
    targetImage={targetFile}
    camera={camera}
  >
    {/*PLACE CONTENT TO APPEAR ON THE IMAGE HERE*/}
  </ImageTracker>
</ZapparCanvas>

Face Tracking

To place content on or around a user's face, create a new FaceTracker component:

<ZapparCamera ref={camera} />
<FaceTracker camera={camera}>
  {/*PLACE CONTENT TO APPEAR ON THE FACE HERE*/}
</FaceTracker>

The group provides a coordinate system that has its origin at the center of the head, with positive X axis to the right, the positive Y axis towards the top and the positive Z axis coming forward out of the user's head.

Note that users typically expect to see a mirrored view of any user-facing camera feed. Please see the section on mirroring the camera view earlier in this document.

Events

The FaceTracker component will emit the following events on the element it's attached to:

  • onVisible - emitted when the face appears in the camera view
  • onNotVisible - emitted when the face is no longer visible in the camera view

Here's an example of using these events:

<ZapparCamera ref={camera} />
<FaceTracker
  onNotVisible={(anchor) => console.log(`Not visible ${anchor.id}`)}
  onVisible={(anchor) => console.log(`Visible ${anchor.id}`)}
  camera={camera}
>
  {/*PLACE CONTENT TO APPEAR ON THE FACE HERE*/}
</FaceTracker>

Face Mesh

In addition to tracking the center of the face using FaceTracker, the Zappar library provides a number of meshes that will fit to the face/head and deform as the user's expression changes. These can be used to apply a texture to the user's skin, much like face paint, or to mask out the back of 3D models so the user's head is not occluded where it shouldn't be.

To use a face mesh, add a mesh component containing a FaceBufferGeometry inside your FaceTracker component, like this:

import { FaceBufferGeometry, /* ... */ } from '@zappar/zappar-react-three-fiber'
// ...
const camera = useRef();
const faceTrackerGroup = useRef();
return (
  <ZapparCanvas>
    <ZapparCamera ref={camera} />
    <FaceTracker
      camera={camera}
      ref={faceTrackerGroup}
    >
      <mesh>
        {/* FaceBufferGeometry requires the tracker group to be passed into it */}
        <FaceBufferGeometry trackerGroup={faceTrackerGroup} />
      </mesh>
      {/*PLACE CONTENT TO APPEAR ON THE FACE HERE*/}
    </FaceTracker>
  </ZapparCanvas>
);

At this time there are two meshes included with the library. The default mesh covers the user's face, from the chin at the bottom to the forehead, and from the sideburns on each side. There are optional parameters that determine if the mouth and eyes are filled or not:

<mesh>
  <FaceBufferGeometry fillEyeLeft fillEyeRight fillMouth trackerGroup={faceTrackerGroup} />
</mesh>

The full head simplified mesh covers the whole of the user's head, including some neck. It's ideal for drawing into the depth buffer in order to mask out the back of 3D models placed on the user's head (see Head Masking below). There are optional parameters that determine if the mouth, eyes and neck are filled or not:

<mesh>
  <FaceBufferGeometry fullHead trackerGroup={faceTrackerGroup} />
</mesh>

Head Masking

If you're placing a 3D model around the user's head, such as a helmet, it's important to make sure the camera view of the user's real face is not hidden by the back of the model. To achieve this, the library provides HeadMaskMesh. It's an entity that fits the user's head and fills the depth buffer, ensuring that the camera image shows instead of any 3D elements behind it in the scene.

To use it, add the HeadMaskMesh entity into your FaceTracker component, before any other 3D content:

import { HeadMaskMesh, /* ... */ } from '@zappar/zappar-react-three-fiber'
// ...
<ZapparCamera ref={camera} />
<FaceTracker camera={camera} ref={faceTrackerGroup}>
  <HeadMaskMesh trackerGroup={faceTrackerGroup} />
  {/*OTHER 3D CONTENT GOES HERE*/}
</FaceTracker>

Instant World Tracking

To track content from a point on a surface in front of the user, use the InstantTracker component:

<InstantTracker placementMode camera={camera}>
    {/*PLACE CONTENT TO APPEAR IN THE WORLD HERE*/}
</InstantTracker>

With the placementMode prop set, the instant tracker will let the user choose a location for the content by pointing their camera around the room. When the user indicates that they're happy with the placement, e.g. by tapping a button on-screen, remove that parameter to fix the content in that location:

const camera = useRef();
const [placementMode, setPlacementMode] = useState(true);
return (
  <>
    <ZapparCanvas>
      <ZapparCamera ref={camera} />
      <InstantTracker placementMode={placementMode} camera={camera}>
        <mesh position={[0, 0, -5]}>
          <sphereBufferGeometry />
          <meshStandardMaterial color="hotpink" />
        </mesh>
      </InstantTracker>
      <directionalLight position={[2.5, 8, 5]} intensity={1.5} />
    </ZapparCanvas>
    <div id="zappar-placement-ui" onClick={() => { setPlacementMode((currentPlacementMode)=> !currentPlacementMode); }}>
      Tap here to
      {placementMode ? ' place ' : ' pick up '}
      the object
    </div>
  </>
);

The group provides a coordinate system that has its origin at the point that's been set, with the positive Y coordinate pointing up out of the surface, and the X and Z coordinates in the plane of the surface.
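Since positive Y points up out of the surface, offsetting content along Y lifts it above the placement point; a sketch, reusing the sphere from the example above:

```javascript
<InstantTracker placementMode={placementMode} camera={camera}>
  {/* Positive Y points up out of the surface, so this sphere floats above the placement point */}
  <mesh position={[0, 1, 0]}>
    <sphereBufferGeometry />
    <meshStandardMaterial color="hotpink" />
  </mesh>
</InstantTracker>
```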