
[_] examples of user control for a sequence of images

Chris Dawson chris at tallhat.com
Fri Nov 2 15:14:54 GMT 2018

hi justin,
wow - thanks for the comprehensive reply - amazing to see papervision
mentioned, which i never really used, but i was also in the "flash"
sphere for a while.

i've just reread my post as i think i may have confused matters - i'm
not talking about rendering any 3D content on the fly (or even
pre-recorded) - it's likely to be a short series (prob <30) of static
frames capturing a simple movement of a real object, recorded on a
still or video camera. and it's simply about giving the user the
ability to boomerang back and forth through the frames.
so yes, a spritesheet would probably work. it's nice real world
examples i'm initially looking for, plus hopefully a library/framework
to help with building these things.

i've been sent a couple of things which look interesting:

"
Below is an example of a simple image sequence trigger
http://scrollmagic.io/examples/expert/image_sequence.html
A CodePen built using it.
https://codepen.io/tutsplus/pen/QEyEQv

Video controls using scroll.
A CodePen that controls an HTML5 video using scroll. This is a fairly
simple example and you would want to cover browser / mobile support,
but it illustrates the idea.
https://codepen.io/ollieRogers/pen/lfeLc/
"

thanks :)
chris

On Fri, 2 Nov 2018 at 13:57, Justin L Mills <jlm at justinfront.net> wrote:
>
> Chris
>
> The problem is that videos store keyframes and then use inter-frame
> ( delta ) compression or similar to store only the differences from the
> previous frame, and so on all the way back to the last keyframe.
>
> So to jump randomly to any frame you have to decode the keyframe and
> then all the difference frames before you can access the frame you want
> to show, and this always causes a lag.
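>
> Roughly, you can see that lag for yourself in the browser - just a
> quick sketch, where the element id is made up and the metadata is
> assumed to have loaded already:
>
>     // rough sketch: time how long a random seek takes on a normal video
>     const video = document.querySelector<HTMLVideoElement>('#clip')!;
>     let seekStart = 0;
>     video.addEventListener('seeking', () => { seekStart = performance.now(); });
>     video.addEventListener('seeked', () => {
>       // everything from the nearest keyframe up to the target had to be decoded
>       console.log(`seek took ${(performance.now() - seekStart).toFixed(1)} ms`);
>     });
>     video.currentTime = Math.random() * video.duration;  // jump somewhere random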
>
> In Flash I used to do this sort of stuff. The trick is to keep videos
> short and get the video encoded with every frame as a keyframe, which is
> perhaps 10x larger in filesize; then, once the video has fully loaded,
> you can just set the cue head. So for instance you can then allow the
> video to move left and right as the mouse moves, or set things up so
> that a person's eyes follow the mouse, but that takes even more effort
> as you may want the video to cover all angles.
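>
> In the browser the equivalent is something like this sketch - assuming
> the clip is short, encoded all-keyframes and already fully buffered
> ( the element id is invented ):
>
>     // sketch: scrub a fully buffered, all-keyframe video with the mouse
>     const vid = document.querySelector<HTMLVideoElement>('#scrub')!;
>     vid.pause();
>     document.addEventListener('mousemove', (e: MouseEvent) => {
>       const t = e.clientX / window.innerWidth;  // 0..1 across the screen
>       vid.currentTime = t * vid.duration;       // "set the cue head"
>     });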
>
> Now another approach is to play the movie and take a snapshot every
> frame. This is sometimes done to apply a greenscreen in real time; you
> will find examples where a video is drawn onto one canvas and then,
> every frame, the pixels are looped through from 0...x and 0...y and
> processed.
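>
> Roughly like this - only a sketch, the ids and the green threshold are
> invented, and it assumes a same-origin video so the canvas isn't
> tainted:
>
>     // sketch: copy the current video frame to a canvas every frame and
>     // knock out "green enough" pixels - a very crude chroma key
>     const src = document.querySelector<HTMLVideoElement>('#cam')!;
>     const cnv = document.querySelector<HTMLCanvasElement>('#out')!;
>     const c2d = cnv.getContext('2d')!;
>
>     function tick(): void {
>       c2d.drawImage(src, 0, 0, cnv.width, cnv.height);
>       const img = c2d.getImageData(0, 0, cnv.width, cnv.height);
>       const d = img.data;  // RGBA, 4 bytes per pixel, row by row
>       for (let i = 0; i < d.length; i += 4) {
>         if (d[i + 1] > 120 && d[i + 1] > d[i] * 1.4 && d[i + 1] > d[i + 2] * 1.4) {
>           d[i + 3] = 0;    // make green-ish pixels transparent
>         }
>       }
>       c2d.putImageData(img, 0, 0);
>       requestAnimationFrame(tick);
>     }
>     requestAnimationFrame(tick);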
>
>
> For instance, with Flash I have grabbed frames from a webcam over an
> RTMP link and then drawn them as a jigsaw. I did something similar with
> an HTML5 canvas and a video, but it was very heavy.
> You can see how the frames are copied here:
> https://github.com/nanjizal/JigsawX/blob/master/jigsawxtargets/hxjs/JigsawDivtastic.hx#L349
>
> But as you can imagine you will struggle to grab the frames faster than
> actual real-time playback, so you would have to run the whole movie at
> more or less real time before you can get all the images, and that is
> often too long for users to wait with a blank screen. Saving an image
> every frame is the same as processing it - you have to copy every pixel,
> so it's heavy.
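>
> If you do want to pre-grab every frame in the browser, the usual trick
> is to step currentTime and wait for each 'seeked' event - a sketch ( the
> fps is an assumption ), and for a normal video it still takes on the
> order of playback time:
>
>     // sketch: step through the video, wait for each seek to finish, and
>     // keep one bitmap per frame so scrubbing is instant afterwards
>     async function grabFrames(video: HTMLVideoElement, fps = 25): Promise<ImageBitmap[]> {
>       const frames: ImageBitmap[] = [];
>       const step = 1 / fps;
>       for (let t = 0; t < video.duration; t += step) {
>         video.currentTime = t;
>         await new Promise<void>(res =>
>           video.addEventListener('seeked', () => res(), { once: true }));
>         frames.push(await createImageBitmap(video));  // copying the pixels is the heavy part
>       }
>       return frames;
>     }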
>
> For instance, this is just one frame where I use the Mandelbrot
> equation; the cost is not the equation but looping through all the
> pixels on the canvas and writing them.
> https://rawgit.com/nanjizal/Mandelbrot/master/build/html5/index.html
>
> The code to loop through the pixels requires width and height
> for-loops, similar to the jigsaw example, but because I use Kha and
> WebGL to render, I can easily move this on a 3D axis if you wanted to
> create some form of fan effect.
> https://github.com/nanjizal/Mandelbrot/blob/master/src/Mandelbrot.hx#L31
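>
> On a plain 2D canvas the shape of that loop is roughly this ( the
> colour function is just a stand-in ):
>
>     // sketch: the cost is this double loop - one write per pixel
>     const cv = document.querySelector<HTMLCanvasElement>('#plot')!;
>     const g = cv.getContext('2d')!;
>     const out = g.createImageData(cv.width, cv.height);
>     for (let y = 0; y < cv.height; y++) {
>       for (let x = 0; x < cv.width; x++) {
>         const i = (y * cv.width + x) * 4;
>         const v = (x ^ y) & 0xff;  // stand-in for the real per-pixel maths
>         out.data[i] = out.data[i + 1] = out.data[i + 2] = v;
>         out.data[i + 3] = 255;     // opaque
>       }
>     }
>     g.putImageData(out, 0, 0);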
>
> If you want to render all the frames from a video ( after you have
> spent a few minutes grabbing them ) you can render them in 3D with
> canvas - I ported this:
> https://nanjizal.github.io/canvasImageTriangle/bin/index.html?update=2
> but it might be heavy for lots of images; it's more useful as a fallback
> for WebGL, perhaps, if you're feeling like wasting a lot of time!  For
> 3D rendering it's best to use WebGL ( especially Kha ;) ).
>
> But WebGL is better for any 3D display, and with Kha it's relatively
> easy, as you can export C++ for a mobile app as well as WebGL HTML5.
>
> Now another solution: with FFmpeg or similar you can step through the
> video and export it as PNG images, then put the PNG images into a
> spritesheet; once you have a spritesheet it's very easy to go
> immediately to any frame.  You can also create spritesheets from Flash /
> Animate - it has an option to export frames to PNGs, and you can
> normally break apart a video on the timeline - but FFmpeg is maybe the
> better tool ( just harder ).
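>
> Once the frames are on a sheet, jumping to frame n is just a source
> rectangle calculation - a sketch, where the ffmpeg command in the
> comment, the grid size and the frame size are all assumptions:
>
>     // e.g. frames exported with: ffmpeg -i clip.mp4 -vf fps=25 frame_%03d.png
>     // then packed into one sheet of COLS x ROWS tiles, each FRAME_W x FRAME_H
>     const COLS = 6, ROWS = 5, FRAME_W = 160, FRAME_H = 90;
>     const sheet = new Image();
>     sheet.src = 'sheet.png';
>
>     function drawFrame(ctx: CanvasRenderingContext2D, n: number): void {
>       const i = Math.min(n, COLS * ROWS - 1);     // clamp to the last frame
>       const sx = (i % COLS) * FRAME_W;            // column within the sheet
>       const sy = Math.floor(i / COLS) * FRAME_H;  // row within the sheet
>       ctx.drawImage(sheet, sx, sy, FRAME_W, FRAME_H, 0, 0, FRAME_W, FRAME_H);
>     }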
>
> This example uses two spritesheets ( I created them in CS3 Flash ), and
> it effectively just copies parts of each sheet to the screen to create
> the animation in WebGL, so two triangles per image. You can do this with
> canvas, but I happen to use Kha ( WebGL ).  Obviously you can easily
> change it to display the frame index that relates to the x position of
> the mouse on screen.
> https://nanjizal.github.io/GridSheet/bin/index.html?i=1
>
> The relevant drawing code ( see TrilateralXtra for a more recent,
> improved version ):
> https://github.com/nanjizal/GridSheet/blob/master/src/gridSheet/GridSheet.hx#L114
>
> In this WebGL demo I convert draw commands into triangles, draw them
> onto a canvas and then animate that in 3D; normally I would use Kha, as
> it's simpler to set up the pipelines.
> https://nanjizal.github.io/TrilateralBazaar/demo/binWebGL/
>
> With Tess2 or PolyK and my Trilateral you can draw whatever shape of
> image you want in 2D or 3D, mapping the image to triangles, including
> images from a spritesheet.  My cat:
> https://nanjizal.github.io/TrilateralBazaar/imageFill/build/html5/index.html
> With Kha I can easily distort this, so you could do some crazy shaders
> on it.
>
> This video covers drawing a webcam to the screen:
> https://haxe.org/videos/tutorials/kha-tutorial-series/episode-073-video-capture.html
> With Kha you can just set up a 3D pipeline and draw the image on a 3D
> plane instead.
>
> Something that might be worth doing is getting your video into lots of
> PNGs, importing them into Blender, and then using it with Armory3D.
> Armory3D exports to Kha WebGL and should give you visual control; you
> can then attempt to add scripts on top.
>
> I may have gone all over the place, but hopefully you will have more of
> a feel for the limitations. I used to be fairly expert in Papervision
> and Away3D ( still viable via OpenFL ) and have now moved to Kha, so I
> am sure I can help you if you're looking for a developer. But you have
> to understand that video does not actually hold all the images you want
> - it only has some keyframes and creates the rest on the fly, so fast
> random access is usually not possible from a standard video; you can
> only randomly access the encoded keyframes.  Spritesheets are probably
> the way to go.
>
> Best
>
> Justin
>
>
> On 02/11/2018 11:44, Chris Dawson wrote:
> >   user can scrub back and forth through in some way
> > (e.g. simple slider control, or drag left<>right, or connected to the
> > page scroll position)
> > a bit like a 360 product spin in a way, but more for a sequence
> > showing a product doing something - e.g. a pile of lego bricks
> > animated into the final model, or opening a pop-up book from flat to
> > the 3D 'spread', or a timelapse of a painting from blank paper to
> > finished work.
> >
> > and ideally i'm looking for a component/framework to help build this
> > sort of thing. but first step is to gather some good examples for the
> > client.
>