Syphon / Syphon-Framework

Syphon is a Mac OS X technology to allow applications to share video and still images with one another in realtime, instantly.

M1 Video Issues #71

Closed slekens closed 2 years ago

slekens commented 2 years ago

Hello everyone. We have an app running with the Syphon framework, and we've noticed that on Macs with the M1 architecture it can only play videos in the ProRes format; with other formats we can't see the image, only hear the audio. Could you be so kind as to let us know if there is any news about complete M1 support, or a workaround so we can use the Syphon framework on M1 Macs?

We are using this metal Implementation.

Thanks for your support.

vade commented 2 years ago

This seems odd. Syphon has no dependencies on the video decoder / video codec or format. Are you properly requesting video in a specific decoded format? What is your requested pixel format?

This smells like a different problem. Syphon has full support on M1.

slekens commented 2 years ago

Yes, thank you for your quick reply. I mentioned M1 because it works very well on Intel Macs, so I thought that might be the problem.

Where can I find the decoded format and the pixel format? Sorry to bother you, but we feel lost: this part of the code was written by another team and they didn't leave any documentation.

vade commented 2 years ago

Ah, no worries. It is odd that it works on Intel for all formats but not on M1, except for ProRes.

Some questions

Both of those consume an NSDictionary for pixel buffer formats. What are you passing?

If you can set a breakpoint for the texture, IOSurface, CVPixelBuffer, or CVMetalTextureRef being emitted by your video playback code, you can introspect its format, which may provide clues as to what is going on.

HTH.
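One way to do the introspection described above is to log the pixel buffer's four-character format code at the breakpoint. A minimal sketch (the `pixelBuffer` variable name is a placeholder, not from the project):

```objc
// Illustrative: print a CVPixelBuffer's pixel format as a four-char code,
// e.g. 'BGRA' or '420v', so Intel and M1 output can be compared.
OSType fmt = CVPixelBufferGetPixelFormatType(pixelBuffer);
NSLog(@"pixel format: '%c%c%c%c'",
      (char)((fmt >> 24) & 0xFF), (char)((fmt >> 16) & 0xFF),
      (char)((fmt >> 8) & 0xFF),  (char)(fmt & 0xFF));
```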

vade commented 2 years ago

Additionally, on the Intel machine you may have installed Apple's Pro Video Codecs package by way of Final Cut Pro. If that is missing on M1, you may not have the same codecs available there as you do on Intel?

Thinking aloud.

slekens commented 2 years ago

Hello, thanks for the response. Here are the answers:

  1. Yes, we are working only with AVFoundation.
  2. We are not using a third-party library.
  3. We are using ProRes, H.265, MPEG-4, AVC, and HEVC.
  4. The first two, yes, but we are not using AVPlayerItemVideoOutput.
  5. For the pixel buffer, we have a class that renders the image texture, but I don't see any creation of a pixel buffer or Metal texture. We have something like this:

```objc
- (CVPixelBufferRef)currentPixelBuffer {
    return _pixelBuffer;
}
```

```objc
if (_pixelBufferPool != nil) {
    if (_pixelBuffer != nil) {
        [self releasePixelBuffer];
    }

    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                         _pixelBufferPool,
                                                         &_pixelBuffer);
    if (status != kCVReturnSuccess) {
        NSLog(@"Couldn't create pixel buffer: %d", status);
    }

    CVPixelBufferLockBaseAddress(_pixelBuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(_pixelBuffer);
    glReadPixels(0, 0, _currentSize.width, _currentSize.height,
                 GL_BGRA, GL_UNSIGNED_BYTE, pxdata);
    CVPixelBufferUnlockBaseAddress(_pixelBuffer, 0);
}
```

Thank you so much for your help! I hope this info helps you understand our issue. PS: I'm installing the codecs, but it's still not working.

vade commented 2 years ago

When you initialize an AVAssetReader, you have an AVAssetReaderTrackOutput associated with each track you are reading (video, audio, metadata, etc.).

What are the requested options dictionaries passed to the video track reader? That's the info we need.

Also, judging from the above code, it appears that your pixel buffers are being read back from OpenGL, not sent to it: the snippet reads OpenGL rendering, does a GPU -> CPU transfer, and populates a pixel buffer with the content of your rendering.

My assumption from your issue is that the problem is the opposite direction: playing back some movie files results in no Syphon texture. So this snippet is irrelevant to the issue, I think...?
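For reference, the "sending" direction looks roughly like this with Syphon's Metal server (the method names follow the Syphon-Framework Metal API; `device`, `commandBuffer`, and `texture` are placeholders for the app's own objects):

```objc
// Rough sketch of publishing a decoded frame's Metal texture to Syphon.
SyphonMetalServer *server = [[SyphonMetalServer alloc] initWithName:@"Video"
                                                             device:device
                                                            options:nil];

// Per frame, once the texture holds the decoded image:
[server publishFrameTexture:texture
            onCommandBuffer:commandBuffer
                imageRegion:NSMakeRect(0, 0, texture.width, texture.height)
                    flipped:NO];
```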

slekens commented 2 years ago

Here is the Dictionary:

```objc
NSDictionary *outputSettingsDict = [[NSDictionary alloc] initWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
    [NSNumber numberWithInt:32], AVLinearPCMBitDepthKey,
    [NSNumber numberWithBool:YES], AVLinearPCMIsFloatKey,
    [NSNumber numberWithBool:YES], AVLinearPCMIsNonInterleaved,
    nil];
```

Yeah, I sent that last snippet because it is inside a class called SyphonLayerRenderer; we assumed that code is in charge of rendering to Syphon.

Thanks for your help.

vade commented 2 years ago

That's the dictionary for creating audio outputs.
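For contrast, a video track's output settings dictionary keys on pixel format rather than audio format. A hypothetical sketch (not from the project; `videoTrack` is a placeholder):

```objc
// Illustrative: output settings for a *video* AVAssetReaderTrackOutput,
// requesting decoded frames as 32-bit BGRA.
NSDictionary *videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
AVAssetReaderTrackOutput *videoOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                               outputSettings:videoSettings];
```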

slekens commented 2 years ago

That's the only one we have; we don't have one for video. Maybe we need to create one?

Thank you.

vade commented 2 years ago

Well, unless you can describe exactly how you are sending video to Syphon, I really can't be of help with diagnosing issues. My suspicion is that the problem lies outside Syphon, since we know it works in other tooling without issue. Maybe you've run into an edge case, but it sounds like there's some discovery to do on your end about how the code you've inherited functions?

Best of luck!

slekens commented 2 years ago

Thank you so much for your time and help. I will review our code deeply and try to check whether, as you say, the problem is before Syphon.

vade commented 2 years ago

Of course, no worries. Feel free to re-open or comment on the issue if you find specific areas of interest. I highly suggest you debug with Xcode: run frames through, set breakpoints, and introspect the content of the frames via Xcode's Quick Look feature, where you can see the images and ensure they have the content you expect.

Good luck!