Closed ShoziX closed 2 years ago
According to your description, I think what you want to do is capture your phone screen, or part of the screen, and then transmit the raw video data to others over the network. Let me know if this is what you mean.
@ShoziX
Yes, that's exactly what I want to do, but right now the raw camera feed is being shared, and I can't seem to modify the feed.
OK, I can tell you how to do this, but there is some work you will need to do, and I can help you if you run into trouble.
Sure, I am up for it. Looking forward to your guidance.
First of all, our Unity SDK does not currently support what you want to do, but our native SDK does. The API is named pushVideoFrame. This is the documentation: https://docs.agora.io/en/Video/API%20Reference/java/classio_1_1agora_1_1rtc_1_1_rtc_engine.html
So you need to wrap the native SDK if you want to call this API from Unity. In fact, you may only need to wrap a few APIs if you only want to call a few of them.
1: Download the native SDKs from these links.
Android SDK: https://download.agora.io/sdk/release/Agora_Native_SDK_for_Android_v2_4_1_FULL.zip?_ga=2.105637827.1963274513.1560245694-1899500272.1512557670
iOS SDK: https://download.agora.io/sdk/release/Agora_Native_SDK_for_iOS_v2_4_1_FULL.zip?_ga=2.65250158.1963274513.1560245694-1899500272.1512557670
2: Our native SDK supports calling the API through C++ headers, and the C++ API interface is the same on Android and iOS. If you want to call this API from Unity, you need to write a C bridge wrapper, like our Unity SDK does.
3: I can show you some sample code.
C++
void CAgoraSDKObject::createEngine(const char *appId)
{
    RtcEngineContext engineContext;
    engineContext.appId = appId;
    engineContext.eventHandler = getCWrapperRtcEngineEventHandler();
    irtcEngine = createAgoraRtcEngine();
    bool init = irtcEngine->initialize(engineContext);
}
C
void createEngine(const char *appId)
{
    cAgoraSDKObject->getCAgoraSDKInstance()->createEngine(appId);
}
C#
[DllImport(MyLibName, CharSet = CharSet.Ansi)]
protected static extern void createEngine(string appId);
This example shows how to call our native API from Unity.
You can wrap all of the APIs like that if you want to call them from Unity. (The API named pushVideoFrame can also be called from Unity in this way.)
Thank you so much for your response, Zhangtao. I have been looking into the SDK that you provided; actually, I do not have much experience in iOS development. After some research, I found that to run native Objective-C code, we need a .mm file containing implementations of all the functions we need, but the project you provided doesn't contain any such file. Can you please guide me on how to implement the pushVideoFrame() function in the already implemented API, so that I don't have to write a wrapper for everything I already get from this project?
I am so sorry that I cannot give you all of the source code, for the security of our source code, but I can tell you how to implement pushVideoFrame().
C++
int CAgoraSDKObject::pushVideoFrame(int type, int format, void* videoBuffer, int stride, int height, int cropLeft, int cropTop, int cropRight, int cropBottom, int rotation, long timestamp)
{
    if (irtcEngine)
    {
        agora::util::AutoPtr<agora::media::IMediaEngine> mediaEngine;
        mediaEngine.queryInterface(irtcEngine, agora::AGORA_IID_MEDIA_ENGINE);
        if (mediaEngine)
        {
            ExternalVideoFrame videoFrame;
            videoFrame.type = type;
            videoFrame.format = format;
            videoFrame.stride = stride;
            videoFrame.height = height;
            videoFrame.buffer = videoBuffer;
            videoFrame.cropLeft = cropLeft;
            videoFrame.cropTop = cropTop;
            videoFrame.cropBottom = cropBottom;
            videoFrame.cropRight = cropRight;
            videoFrame.rotation = rotation;
            videoFrame.timestamp = timestamp;
            return mediaEngine->pushVideoFrame(&videoFrame);
        }
        return -1;
    }
    return NOT_INIT_ENGINE;
}
C
int pushVideoFrame(int type, int format, void *videoBuffer, int stride, int height, int cropLeft, int cropTop, int cropRight, int cropBottom, int rotation, long timestamp)
{
    return CAgoraSDKObject::getCAgoraSDKInstance()->pushVideoFrame(type, format, videoBuffer, stride, height, cropLeft, cropTop, cropRight, cropBottom, rotation, timestamp);
}
C#
[DllImport(MyLibName, CharSet = CharSet.Ansi)]
public static extern int pushVideoFrame(int type, int format, byte[] videoBuffer, int stride, int height, int cropLeft, int cropTop, int cropRight, int cropBottom, int rotation, long timestamp);
In fact, our SDK supports calling the API through C++ headers, so you do not need to call Objective-C on iOS and Java on Android; that would be troublesome. You only need to call the C++ headers in the SDK. The C++ headers are the same for Android, iOS, Windows, and macOS.
I totally understand how the C++ headers are being used in the SDK, and that I don't need to add code on iOS or in Java. All I need is the pushVideoFrame function from the native SDK, but the problem is that the native SDK comes pre-compiled as a .a file, so I can't open that pre-compiled code and add a function to it.
The .a you mean in the Unity SDK is compiled by the iOS project named agoraSdkCWrapper. In fact, it is also just a wrapper. You can create an iOS project like mine.
I wrapped nearly all of the functions, but you only need to wrap the APIs you need to use.
Seems like a lot of rework. Would it be possible for you to put this function in your wrapper, compile the .a file, and forward it? Then all the rework could be avoided.
Are you a personal developer or a business developer? @ShoziX @omarfarooq
Because this is a new feature for our Unity SDK, and we have not published this function in Unity yet, you can contact our customer support engineers and tell them about this feature. This is our official website: https://www.agora.io/en/
We plan to complete this feature of the Unity SDK in the next version.
@zhangtao1104 thank you for this information! I have never wrapped a native SDK, but I'll try what you provided and see if I can get it to work.
With that said, when do you plan to release the next version of the SDK with this functionality? I am definitely excited for this feature to be added!
Can you please let me know if I'm on the right path? I'm using Visual Studio:
STEP 1) Download the Android and iOS native SDKs and import them into the Unity project.
STEP 2) Create a new project in Visual Studio. We will need to build the project twice, once for Android and once for iOS, correct?
For Android: select Installed --> Templates --> Visual C++ --> Cross Platform --> Android, select Dynamic Shared Library (Android), then type the name of the project and click OK.
For iOS: select Installed --> Templates --> Visual C++ --> Cross Platform --> iOS.
STEP 3) Make sure DLL is selected and unselect "Pre-Compiled Header"
STEP 4) Create a C++ source file (Let's assume I call it FirstDLL.cpp)
STEP 5) C++ source file would look like so:
#include "FirstDLL.h"
int CAgoraSDKObject::pushVideoFrame(int type, int format, void* videoBuffer, int stride, int height, int cropLeft, int cropTop, int cropRight, int cropBottom, int rotation, long timestamp)
{
    if (irtcEngine)
    {
        agora::util::AutoPtr<agora::media::IMediaEngine> mediaEngine;
        mediaEngine.queryInterface(irtcEngine, agora::AGORA_IID_MEDIA_ENGINE);
        if (mediaEngine)
        {
            ExternalVideoFrame videoFrame;
            videoFrame.type = type;
            videoFrame.format = format;
            videoFrame.stride = stride;
            videoFrame.height = height;
            videoFrame.buffer = videoBuffer;
            videoFrame.cropLeft = cropLeft;
            videoFrame.cropTop = cropTop;
            videoFrame.cropBottom = cropBottom;
            videoFrame.cropRight = cropRight;
            videoFrame.rotation = rotation;
            videoFrame.timestamp = timestamp;
            return mediaEngine->pushVideoFrame(&videoFrame);
        }
        else
        {
            return -1;
        }
    }
    return NOT_INIT_ENGINE;
}
STEP 6) Create a header file that should look like so (do we need a header file?):
#ifndef FIRSTDLL_NATIVE_LIB_H
#define FIRSTDLL_NATIVE_LIB_H
#define DLLExport __declspec(dllexport)
extern "C"{
DLLExport int CAgoraSDKObject::pushVideoFrame(int type, int format, void* videoBuffer, int stride, int height, int cropLeft, int cropTop, int cropRight, int cropBottom, int rotation, long timestamp);
}
#endif
STEP 7) Build the plugin from Visual Studio. Put the Android plugin file (not a dll) into the Assets/Plugins/Android folder. The supported C++ plugin extension is .so.
Note: if the name of the Android plugin is libFirstDLL-lib.so, remove the lib prefix and the .so extension when referencing it from C#. In this case, it would be [DllImport("FirstDLL-lib")].
If you have both armeabi-v7a and x86 Android .so plugins then put them in Assets\Plugins\Android\libs\armeabi-v7a and Assets\Plugins\Android\libs\x86 folders respectively.
Put the iOS plugin file (not a dll) into the Assets/Plugins/iOS folder. The supported plugin extensions are .a, .m, .mm, .c, and .cpp.
On iOS you must use [DllImport("__Internal")] instead of [DllImport("PluginName")] or [DllImport("FirstDLL")].
STEP 8) We can now call the pushVideoFrame function from C# like so:
[DllImport("FirstDLL")]
public static extern int pushVideoFrame(int type, int format, byte[] videoBuffer, int stride, int height, int cropLeft, int cropTop, int cropRight, int cropBottom, int rotation, long timestamp);
Any help or input would be awesome, I'm still learning this whole wrapper business.
Thank you!
You just can't make it work by adding this function; there is a conflict when the video call and ARKit both try to get hold of the camera.
@ShoziX You can use the API named setExternalVideoSource() to stop the SDK from capturing video frames; then you can use ARKit and the conflict won't happen. Finally, you can capture your screen frame and call the pushVideoFrame() API to achieve real-time video communication.
This is our api document. https://docs.agora.io/en/Video/API%20Reference/java/classio_1_1agora_1_1rtc_1_1_rtc_engine.html
@zhangtao1104 can you give some more details on this, I'm not sure I completely understand.
So we can call setExternalVideoSource and set this flag? Then call pushVideoFrame?
I feel like we are close but I'm just trying to understand everything. Thanks again.
By default, when video is enabled, our SDK captures video frames and transmits them to others; finally, the video frames are rendered on the other users' screens.
But if you call setExternalVideoSource before joinChannel, it means you capture the video frames yourself and push them to our SDK; the SDK then transfers the video stream to others and finally renders the frames on the other users' screens.
@zhangtao1104 I think this makes sense. So would we then capture the screen image every frame, encode to a png, and send the byte data from the encoded png to pushVideoFrame as the video buffer?
I'm using ARFoundation and found this example of how to capture the screen and encode to png
// Copy the camera background to a RenderTexture
Graphics.Blit(null, renderTexture, m_ARCameraBackground.material);
// Copy the RenderTexture from GPU to CPU
var activeRenderTexture = RenderTexture.active;
RenderTexture.active = renderTexture;
if (m_LastCameraTexture == null)
m_LastCameraTexture = new Texture2D(renderTexture.width, renderTexture.height, TextureFormat.RGB24, true);
m_LastCameraTexture.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0);
m_LastCameraTexture.Apply();
RenderTexture.active = activeRenderTexture;
// Encode PNG
var bytes = m_LastCameraTexture.EncodeToPNG();
// Send to pushVideoFrame
It's a little different from what you said.
void Start()
{
    mRect = new Rect(0, 0, Screen.width, Screen.height);
    mTexture = new Texture2D((int)mRect.width, (int)mRect.height, TextureFormat.RGBA32, false);
}

void Update()
{
    StartCoroutine(cutScreen());
}

IEnumerator cutScreen()
{
    yield return new WaitForEndOfFrame();
    mTexture.ReadPixels(mRect, 0, 0);
    mTexture.Apply();
    Renderer rend = GetComponent<Renderer>();
    rend.material.mainTexture = mTexture;
    byte[] bytes = mTexture.GetRawTextureData();
    IRtcEngine rtc = IRtcEngine.QueryEngine();
    if (rtc != null)
    {
        int a = rtc.PushVideoFrame((int)mRect.width, (int)mRect.height, bytes);
        Debug.Log("pushVideoFrame = " + a);
    }
}
The source code I showed you is how to capture the screen and push the video frame buffer to our SDK.
I will give this a shot! Hardest part is going to be creating the wrapper for the functions I need out of the native sdks. I'm going to give it a try this week. Thanks again for your responses and help so far :)
The only thing I changed in the code provided is that I moved the coroutine out of Update. I read a few articles that advised against starting coroutines in Update.
I ended up creating a public function, called when the user joins the chat, that starts the coroutine, and I placed a while(true) around everything in the cutScreen function to keep it looping. Again, this was recommended by some articles to avoid starting the coroutine in Update.
So at this point I'm able to start a chat between two people and I'm able to capture the screen and just update a UI RawImage on the local users device.
The next step is actually sending that to pushVideoFrame. This is where I'm really getting stuck: creating the plugin to wrap the Agora native SDK. Documentation on this process for Unity is really hard to find.
I'm not sure exactly where to start. I've got the code samples provided above, but I'm still scratching my head trying to figure out how to create a plugin that only calls the setExternalVideoSource and pushVideoFrame functions from the native sdks.
I think I see now that I need to include the IMediaEngine and IRTCEngine header files (from the native sdk) in my C++ source file to access the functions I need.
I just need to include a relative path to those headers in my #include and then I should be able to access the functions.
Yes, you are right. You need to include the header files, and when you compile the source code, you need to link the Agora native library.
The .mk file I uploaded is our compile script for the Android wrapper.
The file named agoraSdkCWrapper.xcodeproj.zip is the compile project for the iOS plugin. I already deleted our source code, so it is an empty project; you only need to copy your source code into it and link our native library. It will help you compile the iOS plugin into a xxx.a file.
If it is difficult for you, you can send the source code to me and I will help you compile it.
I will send you my code later today. I've made progress and am able to reference the classes etc. from the header files. Now I'm running into a couple of other issues with the following code:
int CAgoraSDKObject::pushVideoFrame(int type, int format, void* videoBuffer, int stride, int height, int cropLeft, int cropTop, int cropRight, int cropBottom, int rotation, long timestamp)
{ // CAgoraSDKObject not defined
    if (irtcEngine) // irtcEngine not defined
    {
        agora::util::AutoPtr<agora::media::IMediaEngine> mediaEngine; // AutoPtr doesn't appear to be in any of the header files
        mediaEngine.queryInterface(irtcEngine, agora::AGORA_IID_MEDIA_ENGINE);
        if (mediaEngine)
        {
            ExternalVideoFrame videoFrame;
            videoFrame.type = type;
            videoFrame.format = format;
            videoFrame.stride = stride;
            videoFrame.height = height;
            videoFrame.buffer = videoBuffer;
            videoFrame.cropLeft = cropLeft;
            videoFrame.cropTop = cropTop;
            videoFrame.cropBottom = cropBottom;
            videoFrame.cropRight = cropRight;
            videoFrame.rotation = rotation;
            videoFrame.timestamp = timestamp;
            return mediaEngine->pushVideoFrame(&videoFrame);
        }
        else
        {
            return -1;
        }
    }
    return NOT_INIT_ENGINE; // NOT_INIT_ENGINE doesn't appear to be anywhere in any of the header files either
}
So I tried changing some things around based on what I'm able to access from the headers, but I just can't seem to get it working.
I did set up the project to include the additional directory that contains the headers; that seems to be working. The other part, linking the .lib, may be the issue because I don't have that file. I set this to reference the lib folder, but that probably won't work.
So this is what I've got. If I'm not mistaken, including the header files along with the code we shared above this should work?
AgoraTest.cpp ->
#include "stdafx.h"
#include "AgoraTest.h"
#include "../../AgoraRtcEngineKit.plugin/libs/include/AgoraBase.h"
#include "../../AgoraRtcEngineKit.plugin/libs/include/IAgoraMediaEngine.h"
#include "../../AgoraRtcEngineKit.plugin/libs/include/IAgoraRtcEngine.h"
#include "../../AgoraRtcEngineKit.plugin/libs/include/IAgoraService.h"
int CAgoraSDKObject::pushVideoFrame(int type, int format, void* videoBuffer, int stride, int height, int cropLeft, int cropTop, int cropRight, int cropBottom, int rotation, long timestamp)
{
    if (irtcEngine)
    {
        agora::util::AutoPtr<agora::media::IMediaEngine> mediaEngine;
        mediaEngine.queryInterface(irtcEngine, agora::AGORA_IID_MEDIA_ENGINE);
        if (mediaEngine)
        {
            ExternalVideoFrame videoFrame;
            videoFrame.type = type;
            videoFrame.format = format;
            videoFrame.stride = stride;
            videoFrame.height = height;
            videoFrame.buffer = videoBuffer;
            videoFrame.cropLeft = cropLeft;
            videoFrame.cropTop = cropTop;
            videoFrame.cropBottom = cropBottom;
            videoFrame.cropRight = cropRight;
            videoFrame.rotation = rotation;
            videoFrame.timestamp = timestamp;
            return mediaEngine->pushVideoFrame(&videoFrame);
        }
        else
        {
            return -1;
        }
    }
    return NOT_INIT_ENGINE;
}
AgoraTest.h ->
#ifndef AGORATEST_NATIVE_LIB_H
#define AGORATEST_NATIVE_LIB_H
#define DLLExport __declspec(dllexport)
extern "C" {
DLLExport int CAgoraSDKObject::pushVideoFrame(int type, int format, void* videoBuffer, int stride, int height, int cropLeft, int cropTop, int cropRight, int cropBottom, int rotation, long timestamp);
}
#endif
My file structure is like so:
Project Root
|- Assets
|  |- Plugins
|  |  |- Android
|  |  |- AgoraRtcEngineKit.plugin
|  |  |- AgoraTest
Hopefully this helps show my relative paths for the headers.
I made it work, you can get in touch if you need the source code.
@ShoziX that would be great! I don't have a way to message you on here but my email is info@arutility.com. If you could help me out it would be more than appreciated.
Thanks!
@ShoziX is there any way you can send me the source code? I'm excited to start working on it :)
For now, there is no way I can modify the texture that is being transmitted over the network. I am building an AR app in Unity, and I want the feed to be shared after the AR processing is done; if I show game objects in the camera view, they should also be shared over the network. If I had access to the texture being sent from my device, I could handle this on my own, but right now the dll handles everything other than which gameObject the feed gets rendered on. Is there anything I am missing? Or any lead through which I can make it work?