Open jywarren opened 6 years ago
Sure! Thanks.
@jywarren @Divy123 seeing as this is one of the GSoC projects this year and is already being worked on, is it still open to incoming proposals, or shall I draft one pertaining to real-world use cases of IS? I have the necessary research for both.
I think you can go ahead with the real-world use cases one if you want to, and I would be really thankful if you could share your research here, as I have already started working on this. Also, I think we can discuss this with Jeff. Thanks!! @jywarren your thoughts please.
I also think it's fine to assume that even if we have an initial implementation for any of these, we might want to refine it as part of SoC!
I also think it's fine to assume that even if we have an initial implementation for any of these, we might want to refine it as part of SoC!
Can you please explain a bit more on this?
Actually, Vibhor and I are both applying for GSoC. So what's your view on how we should proceed on this one?
Oh, I just mean that -- think of any feature: the first version can be good but may still be improvable, right? So there's almost always room to expand on a feature later. Coming up with a solution now need not mean there can't be a Summer of Code proposal for it later too!
And we also allow multiple people to work on the same project - we advise folks to find ways to break a problem into parts for better collaboration.
@jywarren I think that can be the best way to go on. Thanks
You can pass in a Uint8Array, Float32Array, or any typed array. An example would be the pixels.data array, which has a size of width × height × channels. This array can be given to the readPixels method of the module. Also, this module provides a full canvas context, so any canvas methods available in the browser should be available here. I hope this helps.
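As a small illustration of that width × height × channels layout (the helper name here is my own, not part of any library): the channels of a given pixel sit at consecutive indices in the flat array, and the starting index can be computed like this:

```javascript
// Index of the first channel of pixel (x, y) in a flat array laid out
// row by row, with `channels` values per pixel (4 for RGBA).
function pixelIndex(x, y, width, channels) {
  return (y * width + x) * channels;
}

// A 2x2 RGBA image is a 16-element array:
console.log(pixelIndex(1, 0, 2, 4)); // 4  (second pixel of the first row)
console.log(pixelIndex(0, 1, 2, 4)); // 8  (first pixel of the second row)
```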
@HarshKhandeparkar you said that it provides a full canvas context, so if I do canvas.toDataURI() in the browser, what can be done here? Can you please explain a bit?
It is the full canvas API available in Node.js. Whatever works in the browser will work here.
you said that it provides full canvas context, so like if I do canvas.toDataURI() in browser, what can be done in here?
Same can be done
But canvas is not there so gl.?
Yes. You can also assign it to a var named canvas
Means instead I can do
var canvas = require('gl')(....)
Yes
Coool!!
@Divy123 i found something important. Please have a look at it. https://github.com/stackgl/headless-gl/blob/master/README.md#system-dependencies
I looked into it. Do you find some problem with this?
You need to be careful, as this might not work on every system. It might need some system dependencies to be installed.
For sure, Thanks.
Looks like for Travis it may require a travis.yml change... https://github.com/stackgl/headless-gl/blob/master/README.md#how-can-i-use-headless-gl-with-a-continuous-integration-service
This is great news as it answers our questions about needing a GPU on a cloud container!
I think this can be solved now without any worries. Work on webgl-distort is in progress. Cc: @jywarren
Here are the steps I am planning to take with converting webgl-distort to be node-compatible:
1. Creating an image with jsdom
2. Creating a headless-gl context
3. Adding the image to a texture
4. Applying perspective with the given matrices
5. Converting the gl image texture back to a dataURI
I am a bit unsure about point 4 above. @jywarren @tech4GT @HarshKhandeparkar can you please tell me if I am headed in the right direction?
That's right! Try just moving a couple corners by a hundred pixels or so, something simple. The module will accept new x,y coordinates for each of the four corners.
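A concrete sketch of that "move a couple corners by a hundred pixels" suggestion, with hypothetical numbers and my own corner ordering (the actual module may expect a different format):

```javascript
// Assumed corner order: top-left, top-right, bottom-right, bottom-left,
// for a hypothetical 1024x768 image.
const imgWidth = 1024, imgHeight = 768;
const corners = [
  { x: 0, y: 0 },                  // top-left
  { x: imgWidth, y: 0 },           // top-right
  { x: imgWidth, y: imgHeight },   // bottom-right
  { x: 0, y: imgHeight },          // bottom-left
];

// Nudge just the two top corners by 100px for a visible, simple distortion.
const distorted = corners.map((c, i) =>
  i < 2 ? { x: c.x + 100, y: c.y + 100 } : c);

console.log(distorted[0]); // { x: 100, y: 100 }
```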
Sounds great!
Try translation first to test if headless-gl works.
Try translating or rotating using simple matrices first. Even scale is ok.
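For reference, those "simple matrices" can be sketched in plain JS like this (helper names are my own; this is just the math, independent of any WebGL API):

```javascript
// 3x3 matrices acting on 2D homogeneous coordinates [x, y, 1].
function translate(tx, ty) { return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]; }
function scale(sx, sy)     { return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]; }

// Apply a 3x3 matrix to a 2D point (implicit homogeneous coordinate 1).
function apply(m, [x, y]) {
  return [
    m[0][0] * x + m[0][1] * y + m[0][2],
    m[1][0] * x + m[1][1] * y + m[1][2],
  ];
}

console.log(apply(translate(10, 20), [1, 2])); // [ 11, 22 ]
console.log(apply(scale(2, 3), [1, 2]));       // [ 2, 6 ]
```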
Sure @HarshKhandeparkar .
This is some of the code I have written to apply some basic filters to an image in headless-gl, but I am pretty unsure about how to fetch the results out of it. In the browser we do have the canvas for WebGL, but here I don't know. @jywarren please help with this.
const jsdom = require("jsdom");
const { JSDOM } = jsdom;
const { document } = (new JSDOM(`...`)).window;
const image = document.createElement('img');
image.crossOrigin = "anonymous";
image.src = './examples/example-1024.jpg';
// Assign the handler; don't invoke it immediately.
image.onload = function () {
  var width = 64;
  var height = 64;
  var gl = require('gl')(width, height, { preserveDrawingBuffer: true });
  gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
  gl.clearColor(1.0, 0.8, 0.1, 1.0);
  gl.clear(gl.COLOR_BUFFER_BIT);
  // Multi-line shader sources need template literals (backticks), not quotes.
  const vertShaderSource = `
    attribute vec2 position;
    varying vec2 texCoords;
    void main() {
      texCoords = (position + 1.0) / 2.0;
      texCoords.y = 1.0 - texCoords.y;
      gl_Position = vec4(position, 0, 1.0);
    }
  `;
  const fragShaderSource = `
    precision highp float;
    varying vec2 texCoords;
    uniform sampler2D textureSampler;
    void main() {
      float warmth = -0.2;
      float brightness = 0.2;
      vec4 color = texture2D(textureSampler, texCoords);
      color.r += warmth;
      color.b -= warmth;
      color.rgb += brightness;
      gl_FragColor = color;
    }
  `;
  const vertShader = gl.createShader(gl.VERTEX_SHADER);
  const fragShader = gl.createShader(gl.FRAGMENT_SHADER);
  gl.shaderSource(vertShader, vertShaderSource);
  gl.shaderSource(fragShader, fragShaderSource);
  gl.compileShader(vertShader);
  gl.compileShader(fragShader);
  const program = gl.createProgram();
  gl.attachShader(program, vertShader);
  gl.attachShader(program, fragShader);
  gl.linkProgram(program);
  gl.useProgram(program);
  // Two triangles covering the full clip-space quad.
  const vertices = new Float32Array([
    -1, -1,
    -1, 1,
    1, 1,
    -1, -1,
    1, 1,
    1, -1,
  ]);
  const vertexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
  const positionLocation = gl.getAttribLocation(program, 'position');
  gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
  gl.enableVertexAttribArray(positionLocation);
  const texture = gl.createTexture();
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, texture);
  var getPixels = require("get-pixels");
  getPixels('./examples/example-1024.jpg', function(err, pixels) {
    if (err) {
      console.log("Bad image path");
      return;
    }
    // Use the image's real dimensions from pixels.shape ([width, height, channels])
    // rather than a hardcoded 2x2, and note that this texImage2D overload takes a
    // border argument (always 0) between height and format.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, pixels.shape[0], pixels.shape[1], 0,
      gl.RGBA, gl.UNSIGNED_BYTE, new Uint8Array(pixels.data));
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    gl.drawArrays(gl.TRIANGLES, 0, 6);
    console.log(pixels.data);
  });
};
@HarshKhandeparkar
That is a lot to digest. I will answer in a few hrs. I am busy. Sorry.
No issues. Thanks
@jywarren Please look into this.
Hmm, is there a gl.toDataURL method?
canvas.toDataURL('image/png')
I think there is a canvas.toDataUri() method. @Divy123
Oh, we both answered together. OK, I googled it; my answer is wrong. It is toDataURL().
There is one on canvas, but here there is no such method available. @HarshKhandeparkar
:-))))))
Are you sure? Did you try it out?
yes
The docs for headless-gl say that it returns a WebGLRenderingContext object. Here is a link to the MDN docs for WebGLRenderingContext: https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext
But I console.logged this and didn't get anything.
Try doing
console.log(typeof gl.toDataURL)
ok wait!!
Still undefined @HarshKhandeparkar
Even I'm not sure now. Try searching for some other suitable method in the MDN docs link I just gave you. Maybe get-pixels accepts a buffer as an input? I'm not sure..
I am a bit unsure about how to get the changed pixels out of the gl context, as you can see in the code above. In the browser we get the context from the canvas, but here we directly have gl. @HarshKhandeparkar @jywarren Is there a way, even in WebGL, to get the changed texture data after the transformations without using canvas?
Actually webgl is a variant of 2d canvas, so it always has a canvas context. Let me look a sec..
Ah, readPixels - see the example?
https://github.com/stackgl/headless-gl#example
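One gotcha worth noting with readPixels (this is a general WebGL behavior, not specific to headless-gl): the framebuffer origin is bottom-left, so the buffer it fills typically comes out with the rows in bottom-to-top order relative to normal image order. A small pure-JS helper (my own sketch, not part of any library) to flip the rows:

```javascript
// Flip the row order of a flat RGBA buffer (returns a new array),
// since gl.readPixels hands back the bottom row first.
function flipRows(pixels, width, height, channels = 4) {
  const rowSize = width * channels;
  const out = new Uint8Array(pixels.length);
  for (let y = 0; y < height; y++) {
    const src = y * rowSize;
    const dst = (height - 1 - y) * rowSize;
    out.set(pixels.subarray(src, src + rowSize), dst);
  }
  return out;
}

// 1x2 image: bottom row [9,9,9,9] read first, top row [1,1,1,1] second.
const raw = new Uint8Array([9, 9, 9, 9, 1, 1, 1, 1]);
console.log(Array.from(flipRows(raw, 1, 2))); // [ 1, 1, 1, 1, 9, 9, 9, 9 ]
```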
https://www.npmjs.com/package/gl
This would no longer be pure JavaScript, but for some modules this is interesting. For example, the FisheyeGL module #27 can currently only be run in a browser, and webgl-distort #64 would also be this way.
Long-term project!
Hmm, maybe also these resources: