awslabs / amazon-kinesis-video-streams-producer-c

https://awslabs.github.io/amazon-kinesis-video-streams-producer-c/group__PublicMemberFunctions.html
Apache License 2.0

[StreamEvent.c:263] Content type returned from the DescribeStream call doesn't match the one specified in the StreamInfo #138

Closed: yibin-lin closed this issue 3 years ago

yibin-lin commented 3 years ago

Hi, I want to push both a video stream and an audio stream, but I have run into a problem. The picture below shows that I created the stream with both video and audio track information, yet the content type returned by the server only shows the video information. My calling sequence follows the KvsAacAudioVideoStreamingSample.c example. Conversely, when I follow the KvsVideoOnlyStreamingSample.c example and push video only, everything works fine. This problem has bothered me for a while; I hope you can help me, thank you.

[screenshot attached: stream created with both video and audio track information]

The debug print on my device is as follows:

[2020-11-16 10:59:01.584][ffkvsc ffkvsc(1576)][I][ ffkvs][Client.c:496] Creating Kinesis Video Stream.
[2020-11-16 10:59:01.591][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3370] Kinesis Video Stream Info
[2020-11-16 10:59:01.598][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3371] Stream name: Cam_Testing_02
[2020-11-16 10:59:01.608][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3372] Streaming type: STREAMING_TYPE_REALTIME
[2020-11-16 10:59:01.616][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3373] Content type: video/h264,audio/aac
[2020-11-16 10:59:01.623][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3374] Max latency (100ns): 1020000000
[2020-11-16 10:59:01.630][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3375] Fragment duration (100ns): 20000000
[2020-11-16 10:59:01.638][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3376] Key frame fragmentation: Yes
[2020-11-16 10:59:01.647][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3377] Use frame timecode: Yes
[2020-11-16 10:59:01.659][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3378] Absolute frame timecode: No
[2020-11-16 10:59:01.667][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3379] Nal adaptation flags: 40
[2020-11-16 10:59:01.674][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3380] Average bandwith (bps): 2097152
[2020-11-16 10:59:01.681][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3381] Framerate: 120
[2020-11-16 10:59:01.690][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3382] Buffer duration (100ns): 1200000000
[2020-11-16 10:59:01.698][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3383] Replay duration (100ns): 600000000
[2020-11-16 10:59:01.705][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3384] Connection Staleness duration (100ns): 50000000
[2020-11-16 10:59:01.713][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3385] Store Pressure Policy: 1
[2020-11-16 10:59:01.720][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3386] View Overflow Policy: 1
[2020-11-16 10:59:01.729][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3392] Segment UUID: NULL
[2020-11-16 10:59:01.737][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3400] Frame ordering mode: 3
[2020-11-16 10:59:01.745][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3401] Track list
[2020-11-16 10:59:01.752][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3404] Track id: 1
[2020-11-16 10:59:01.759][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3405] Track name: kvs_video_track
[2020-11-16 10:59:01.769][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3406] Codec id: V_MPEG4/ISO/AVC
[2020-11-16 10:59:01.777][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3407] Track type: TRACK_INFO_TYPE_VIDEO
[2020-11-16 10:59:01.790][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3416] Track cpd: NULL
[2020-11-16 10:59:01.797][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3404] Track id: 2
[2020-11-16 10:59:01.804][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3405] Track name: kvs_audio_track
[2020-11-16 10:59:01.814][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3406] Codec id: A_AAC
[2020-11-16 10:59:01.822][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3407] Track type: TRACK_INFO_TYPE_AUDIO
[2020-11-16 10:59:01.830][ffkvsc ffkvsc(1576)][W][ ffkvs][Stream.c:3412] Track cpd: 1590
2020-11-16 02:59:01 DEBUG stepStateMachine(): State Machine - Current state: 0x0000000000000001, Next state: 0x0000000000000002
[2020-11-16 10:59:07.046][ffkvsc ffkvsc(1576)][W][ ffkvs][Response.c:437] curl perform failed for url https://kinesisvideo.us-west-2.amazonaws.com/describeStream with result Timeout was reached: Resolving timed out after 5000 milliseconds
[2020-11-16 10:59:07.055][ffkvsc ffkvsc(1576)][W][ ffkvs][Response.c:459] HTTP Error 0 : Response: (null) Request URL: https://kinesisvideo.us-west-2.amazonaws.com/describeStream Request Headers: Authorization: AWS4-HMAC-SHA256 Credential=AKIAV3KC4YXBNQMIALFB/20201116/us-west-2/kinesisvideo/aws4_request, SignedHeaders=host;user-agent;x-amz-date, Signature=14ec1835107365e7dab572f0eaab969d804930acb7a5f1d777e34403f594d146 content-length: 35 content-type: application/json host: kinesisvideo.us-west-2.amazonaws.com user-agent: AWS-SDK-KVS/3.0.0 GCC/4.8.3 Linux/3.4.35 armv7l X-Amz-Date: 20201116T025901Z
2020-11-16 02:59:07 DEBUG describeStreamCurlHandler(): DescribeStream API response:
[2020-11-16 10:59:07.068][ffkvsc ffkvsc(1576)][I][ ffkvs][Client.c:933] Describe stream result event.
[2020-11-16 10:59:08.533][ffkvsc ffkvsc(1576)][I][ ffkvs][Response.c:543] RequestId: 99de393a-17eb-482b-9542-aa571f69ed97
2020-11-16 02:59:08 DEBUG describeStreamCurlHandler(): DescribeStream API response: {"StreamInfo":{"CreationTime":1.602654966513E9,"DataRetentionInHours":2,"DeviceName":"JY9757ACX1EVE7NM","IngestionConfiguration":null,"KmsKeyId":"arn:aws:kms:us-west-2:402256610754:alias/aws/kinesisvideo","MediaType":"video/h264","Status":"ACTIVE","StreamARN":"arn:aws:kinesisvideo:us-west-2:402256610754:stream/Cam_Testing_02/1602654966513","StreamName":"Cam_Testing_02","Version":"G4NBgwzAz9KHlXDFQC8h"}}
[2020-11-16 10:59:08.551][ffkvsc ffkvsc(1576)][I][ ffkvs][Client.c:933] Describe stream result event.
[2020-11-16 10:59:08.559][ffkvsc ffkvsc(1576)][I][ PRINT][StreamEvent.c:260] yibin test pKinesisVideoStream->streamInfo.streamCaps.contentType:video/h264,audio/aac

[2020-11-16 10:59:08.570][ffkvsc ffkvsc(1576)][I][ PRINT][StreamEvent.c:261] yibin test streamDescription.contentType:video/h264

[2020-11-16 10:59:08.577][ffkvsc ffkvsc(1576)][W][ ffkvs][StreamEvent.c:263] Content type returned from the DescribeStream call doesn't match the one specified in the StreamInfo

..........

The final log shows that the push is not successful:

[2020-11-16 10:59:17.487][ffkvsc KVS-Audio(1585)][W][ ffkvs][ContinuousRetryStreamCallbacks.c:292] Reporting stream error. Errored timecode: 400000 Status: 0x7dac05200007d
[2020-11-16 10:59:17.496][ffkvsc KVS-Video(1584)][W][ ffkvs][Client.c:680] Failed to submit frame to Kinesis Video client. status: 0x52000085 decoding timestamp: 28400000 presentation timestamp: 28400000
2020-11-16 02:59:17 DEBUG defaultStreamErrorReportCallback(): Reported streamError callback for stream handle 5009536. Upload handle 2. Fragment timecode in 100ns: 400000. Error status: 0x5200007d
2020-11-16 02:59:17 DEBUG stepStateMachine(): State Machine - Current state: 0x0000000000000100, Next state: 0x0000000000000200
[2020-11-16 10:59:17.504][ffkvsc KVS-Video(1584)][N][ KVSC][ffkvsc_core.c:172] putKinesisVideoFrame failed with 0x52000085

[2020-11-16 10:59:17.531][ffkvsc KVS-Video(1584)][W][ ffkvs][Client.c:680] Failed to submit frame to Kinesis Video client. status: 0x52000085 decoding timestamp: 28400000 presentation timestamp: 28400000
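(Note for readers hitting the same warning: the two debug prints at StreamEvent.c:260/261 above show exactly what is being compared. The DescribeStream response reports the stream's MediaType as "video/h264", while the local StreamInfo declares "video/h264,audio/aac". A minimal sketch of that kind of comparison, written purely for illustration and not copied from the producer source, is below.)

```c
#include <stdio.h>
#include <string.h>

// Illustration only: compare the locally configured content type against the
// MediaType returned by DescribeStream, as the two log lines above suggest.
static int contentTypeMatches(const char* localContentType, const char* describedMediaType)
{
    // A NULL/empty described type means the service has no conflicting value
    if (describedMediaType == NULL || describedMediaType[0] == '\0') {
        return 1;
    }
    return strcmp(localContentType, describedMediaType) == 0;
}

int main(void)
{
    const char* localContentType = "video/h264,audio/aac"; // from StreamInfo in the log
    const char* describedMediaType = "video/h264";         // from the DescribeStream response
    if (!contentTypeMatches(localContentType, describedMediaType)) {
        printf("Content type returned from the DescribeStream call doesn't match the one specified in the StreamInfo\n");
    }
    return 0;
}
```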

MushMal commented 3 years ago

@yibin-lin A few pointers below:

I am guessing that you are defining an audio/video multi-track stream but you are not producing video frames. Please check your configuration and the media pipeline, and ensure you are producing both audio and video frames.

Please resolve this issue; if you have further questions, please cut separate issues per the points mentioned above.
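(To make that configuration check concrete, a small dump helper like the one below can be called right after the stream info provider is created. This is a sketch only; the field names are assumed to match the producer's StreamInfo/TrackInfo structs referenced in the code later in this thread.)

```c
#include <com/amazonaws/kinesis/video/cproducer/Include.h>
#include <stdio.h>

// Sketch only: after createRealtimeAudioVideoStreamInfoProvider(), dump the track
// list so it is obvious whether both a video and an audio track are configured.
static void dumpTrackList(PStreamInfo pStreamInfo)
{
    UINT32 i;
    printf("content type: %s, track count: %u\n",
           pStreamInfo->streamCaps.contentType,
           pStreamInfo->streamCaps.trackInfoCount);
    for (i = 0; i < pStreamInfo->streamCaps.trackInfoCount; i++) {
        PTrackInfo pTrack = &pStreamInfo->streamCaps.trackInfoList[i];
        printf("  track id %llu, codec id %s\n",
               (unsigned long long) pTrack->trackId,
               pTrack->codecId);
    }
}
```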

yibin-lin commented 3 years ago

Hi MushMal, thank you for answering. I am using an IP camera; the video is encoded as H.264 and the audio as AAC. Following your suggestion, I went over my stream configuration: the video frame rate is 25 fps, so the video frame duration (SAMPLE_VIDEO_FRAME_DURATION) is correct, and my audio sampling rate is 8 kHz, so I changed the corresponding parameters such as SAMPLE_AUDIO_FRAME_DURATION and AUDIO_TRACK_SAMPLING_RATE. The result is still wrong. I compared my code against the KvsAacAudioVideoStreamingSample.c example several times but could not find the problem. Can you help me check whether there is something wrong with my code? Our company's IP cameras need to connect to Amazon Cloud. Thank you very much.
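(For reference, the duration values being adjusted follow directly from the codec parameters: an AAC frame carries 1024 samples, so at 8 kHz one audio frame lasts 1024 / 8000 s = 128 ms, and at 25 fps one video frame lasts 1/25 s = 40 ms. A small worked example using the HUNDREDS_OF_NANOS_* constants from the producer headers, with my own illustrative macro names:)

```c
#include <com/amazonaws/kinesis/video/cproducer/Include.h>
#include <stdio.h>

#define FPS_VALUE              25
#define AAC_SAMPLES_PER_FRAME  1024
#define AUDIO_SAMPLING_RATE_HZ 8000

int main(void)
{
    // 1024 samples / 8000 Hz = 128 ms per AAC frame
    UINT64 audioFrameDuration = (UINT64) AAC_SAMPLES_PER_FRAME * HUNDREDS_OF_NANOS_IN_A_SECOND / AUDIO_SAMPLING_RATE_HZ;
    // 1 / 25 fps = 40 ms per video frame
    UINT64 videoFrameDuration = HUNDREDS_OF_NANOS_IN_A_SECOND / FPS_VALUE;

    printf("audio frame duration: %llu (100ns) = %llu ms\n",
           (unsigned long long) audioFrameDuration,
           (unsigned long long) (audioFrameDuration / HUNDREDS_OF_NANOS_IN_A_MILLISECOND));
    printf("video frame duration: %llu (100ns) = %llu ms\n",
           (unsigned long long) videoFrameDuration,
           (unsigned long long) (videoFrameDuration / HUNDREDS_OF_NANOS_IN_A_MILLISECOND));
    return 0;
}
```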

#include <com/amazonaws/kinesis/video/cproducer/Include.h>
#include "ffkvsc_module.h"
#include "ffavs_audio.h"

#ifdef __cplusplus
extern "C" {
#endif

#define DEFAULT_RETENTION_PERIOD          2 * HUNDREDS_OF_NANOS_IN_AN_HOUR
#define DEFAULT_BUFFER_DURATION           120 * HUNDREDS_OF_NANOS_IN_A_SECOND
#define DEFAULT_CALLBACK_CHAIN_COUNT      5
#define DEFAULT_KEY_FRAME_INTERVAL        50
#define DEFAULT_FPS_VALUE                 25
#define DEFAULT_STREAM_DURATION           20 * HUNDREDS_OF_NANOS_IN_A_SECOND
#define DEFAULT_STORAGE_SIZE              16 * 1024 * 1024
#define RECORDED_FRAME_AVG_BITRATE_BIT_PS 3800000
#define SAMPLE_AUDIO_FRAME_DURATION       (128 * HUNDREDS_OF_NANOS_IN_A_MILLISECOND) // 1024/8000
#define SAMPLE_VIDEO_FRAME_DURATION       (HUNDREDS_OF_NANOS_IN_A_SECOND / DEFAULT_FPS_VALUE)
#define AUDIO_TRACK_SAMPLING_RATE         8000
#define AUDIO_TRACK_CHANNEL_CONFIG        2
#define FILE_LOGGING_BUFFER_SIZE          (100 * 1024)
#define MAX_NUMBER_OF_LOG_FILES           5

#define PUT_STREAM_VIDEO_AUDIO

static ffss_thread_t gs_kvsc_video_task;
static ffss_thread_t gs_kvsc_audio_task;
static ffstream_t gs_kvsc_video_stream;
static ffstream_t gs_kvsc_audio_stream;
bool gs_brun = false;

typedef struct {
    PBYTE buffer;
    UINT32 size;
} FrameData, *PFrameData;

typedef struct {
    ATOMIC_BOOL firstVideoFramePut;
    UINT64 streamStopTime;
    UINT64 streamStartTime;
    STREAM_HANDLE streamHandle;
    CLIENT_HANDLE clientHandle;
} SampleCustomData, *PSampleCustomData;

SampleCustomData g_data;

// Forward declaration of the default thread sleep function
VOID defaultThreadSleep(UINT64);

STATUS ffkvsc_read(PFrame pFrame, ffstream_t *pHndl)
{
frame_info_t *vinfo = NULL;
STATUS retStatus = STATUS_SUCCESS;

CHK(pFrame != NULL, STATUS_NULL_ARG);
CHK(pHndl != NULL, STATUS_NULL_ARG);

// Get the size and read into frame
if(ffstream_read(pHndl, &vinfo) != 0)
{
    return STATUS_READ_FILE_FAILED;
}
pFrame->size = vinfo->u32FrmSize;
pFrame->frameData = (PBYTE)FFSS_DATA_NEXT(vinfo);
if(UNLIKELY(vinfo->stVideo.bIframe))
{
    pFrame->flags = FRAME_FLAG_KEY_FRAME;
}
else
{
    pFrame->flags = FRAME_FLAG_NONE;
}
if (pFrame->flags == FRAME_FLAG_KEY_FRAME)
{
    //kvs_inf("Key frame size %" PRIu64, pFrame->size);
}

CleanUp:
return retStatus;
}

PVOID putVideoFrameRoutine(PVOID args)
{
ffss_thread_set_name("KVS-Video");
STATUS retStatus = STATUS_SUCCESS;
Frame frame;
STATUS status;
UINT32 frameIndex = 0;
int result;

if(ffstream_open(&gs_kvsc_video_stream, SHM_STREAM_1_KEY) != 0)
{
    kvs_err("video stream open failed!!!");
    retStatus = STATUS_OPEN_FILE_FAILED;
    goto CleanUp;
}

frame.version = FRAME_CURRENT_VERSION;
frame.trackId = DEFAULT_VIDEO_TRACK_ID;
frame.duration = 0;
frame.decodingTs = 0;
frame.presentationTs = 0;
frame.index = 0;

G_ipcData->cloud.login_state = 1;  // login success,  send frame.
ffstream_seek_Iframe(&gs_kvsc_video_stream, -1);

while(gs_brun) 
{
    frame.index = frameIndex;
    frame.flags = FRAME_FLAG_KEY_FRAME;

    result = ffkvsc_read(&frame, &gs_kvsc_video_stream);
    if(result == STATUS_READ_FILE_FAILED)
    {
        // synchronize putKinesisVideoFrame to running time
        usleep(20*1000); // 20ms
        continue;
    }
    else if(result != STATUS_SUCCESS)
    {
        kvs_err("ffkvsc_read result != STATUS_SUCCESS!!!");
        goto CleanUp;
    }

    status = putKinesisVideoFrame(g_data.streamHandle, &frame);
    if(STATUS_FAILED(status))
    {
        kvs_wrn("putKinesisVideoFrame failed with 0x%08x\n", status);
        status = STATUS_SUCCESS;
        continue;
    }
    else
    {
        kvs_inf("putKinesisVideoFrame successs.");
    }
    ATOMIC_STORE_BOOL(&g_data.firstVideoFramePut, TRUE);

    frame.presentationTs += SAMPLE_VIDEO_FRAME_DURATION;
    frame.decodingTs = frame.presentationTs;
    frameIndex++;
}

CHK_STATUS(stopKinesisVideoStreamSync(g_data.streamHandle));
CHK_STATUS(freeKinesisVideoStream(&(g_data.streamHandle)));
CHK_STATUS(freeKinesisVideoClient(&(g_data.clientHandle)));

CleanUp:
G_ipcData->cloud.login_state = 0;
if(retStatus != STATUS_SUCCESS)
{
    kvs_wrn("CleanUp putVideoFrameRoutine failed with 0x%08x", retStatus);
}

return (PVOID) (uintptr_t) retStatus;
}

PVOID putAudioFrameRoutine(PVOID args)
{
ffss_thread_set_name("KVS-Audio");
STATUS retStatus = STATUS_SUCCESS;
Frame frame;
STATUS status;
int result;

if(ffstream_open(&gs_kvsc_audio_stream, SHM_AUDIO_CAPTURE_KEY) != 0)
{
    kvs_err("audio stream open failed!!!");
    retStatus = STATUS_OPEN_FILE_FAILED;
    goto CleanUp;
}

frame.version = FRAME_CURRENT_VERSION;
frame.trackId = DEFAULT_AUDIO_TRACK_ID;
frame.duration = 0;
frame.decodingTs = 0; // relative time mode
frame.presentationTs = 0; // relative time mode
frame.index = 0;
frame.flags = FRAME_FLAG_NONE; // audio track is not used to cut fragment

while(gs_brun) 
{
    result = ffkvsc_read(&frame, &gs_kvsc_audio_stream);
    if(result == STATUS_READ_FILE_FAILED)
    {
        // synchronize putKinesisVideoFrame to running time
        kvs_wrn("ffkvsc_read failed audio");
        usleep(20*1000); // 20ms
        continue;
    }
    else if(result != STATUS_SUCCESS)
    {
        kvs_err("ffkvsc_read result != STATUS_SUCCESS!!!");
        goto CleanUp;
    }

    // no audio can be put until first video frame is put
    if(ATOMIC_LOAD_BOOL(&g_data.firstVideoFramePut))
    {           
        status = putKinesisVideoFrame(g_data.streamHandle, &frame);
        if(STATUS_FAILED(status))
        {
            kvs_wrn("audio putKinesisVideoFrame failed with 0x%08x\n", status);
            status = STATUS_SUCCESS;
        }
        else
        {
            kvs_inf("audio putKinesisVideoFrame success.");
        }

        frame.presentationTs += SAMPLE_AUDIO_FRAME_DURATION;
        frame.decodingTs = frame.presentationTs;
        frame.index++; 
    }  
}

CleanUp:
if (retStatus != STATUS_SUCCESS)
{
    kvs_wrn("putAudioFrameRoutine failed with 0x%08x", retStatus);
}

return (PVOID) (uintptr_t) retStatus;
}

static void kvsc_monitor_keepalive(void *param)
{
if(!G_ipcData->cloud.enable || G_ipcData->cloud.type != CLOUD_TYPE_AWS)
{
    return;
}
if(UNLIKELY(gs_brun == false))
{
    char *mod[] = {MOD_NAME_KVSC};
    ffss_module_reload_special(MOD_NAME_KVSC, mod, 1);
}
}

TIMER_SECOND(kpa, 10, kvsc_monitor_keepalive, NULL);

static int load_kvsc(void)
{
PDeviceInfo pDeviceInfo = NULL;
PStreamInfo pStreamInfo = NULL;
PClientCallbacks pClientCallbacks = NULL;
PStreamCallbacks pStreamCallbacks = NULL;
CLIENT_HANDLE clientHandle = INVALID_CLIENT_HANDLE_VALUE;
STREAM_HANDLE streamHandle = INVALID_STREAM_HANDLE_VALUE;
STATUS retStatus = STATUS_SUCCESS;
PCHAR accessKey = NULL, secretKey = NULL, sessionToken = NULL;
PCHAR streamName = NULL, region = NULL, cacertPath = NULL;

#ifdef PUT_STREAM_VIDEO_AUDIO
PTrackInfo pAudioTrack = NULL;
BYTE audioCpd[KVS_AAC_CPD_SIZE_BYTE];
#endif
UINT64 streamStopTime, streamingDuration = DEFAULT_STREAM_DURATION;

memset(&g_data, 0, sizeof(SampleCustomData));

G_ipcData->cloud.login_state = 0; // default not login.

accessKey    = AWS_ACESS_KEY_ID;
secretKey    = AWS_SECRET_ACESS_KEY;
cacertPath   = "/etc/ssl/SFSRootCAG2.pem";
sessionToken = NULL;
streamName   = STREAM_NAME;
region       = (PCHAR) DEFAULT_AWS_REGION;

streamStopTime = defaultGetTime() + streamingDuration;
//create dev info
CHK_STATUS(createDefaultDeviceInfo(&pDeviceInfo));
pDeviceInfo->clientInfo.loggerLogLevel = LOG_LEVEL_DEBUG;
pDeviceInfo->storageInfo.storageSize = DEFAULT_STORAGE_SIZE;

// create stream info
CHK_STATUS(createRealtimeAudioVideoStreamInfoProvider(streamName, DEFAULT_RETENTION_PERIOD, DEFAULT_BUFFER_DURATION, &pStreamInfo));

// set up audio cpd.
pAudioTrack = pStreamInfo->streamCaps.trackInfoList[0].trackId == DEFAULT_AUDIO_TRACK_ID ?
              &pStreamInfo->streamCaps.trackInfoList[0] :
              &pStreamInfo->streamCaps.trackInfoList[1];

// generate audio cpd
pAudioTrack->codecPrivateData = audioCpd;
pAudioTrack->codecPrivateDataSize = KVS_AAC_CPD_SIZE_BYTE;
CHK_STATUS(mkvgenGenerateAacCpd(AAC_LC, AUDIO_TRACK_SAMPLING_RATE, AUDIO_TRACK_CHANNEL_CONFIG, pAudioTrack->codecPrivateData, pAudioTrack->codecPrivateDataSize));

// use relative time mode. Buffer timestamps start from 0
pStreamInfo->streamCaps.absoluteFragmentTimes = FALSE;    

// adjust members of pStreamInfo here if needed
CHK_STATUS(createDefaultCallbacksProviderWithAwsCredentials(accessKey,
                                                            secretKey,
                                                            sessionToken,
                                                            MAX_UINT64,
                                                            region,
                                                            cacertPath,
                                                            NULL,
                                                            NULL,
                                                            &pClientCallbacks));

if(NULL != getenv(ENABLE_FILE_LOGGING)) 
{
    if((retStatus = addFileLoggerPlatformCallbacksProvider(pClientCallbacks,
                                                           FILE_LOGGING_BUFFER_SIZE,
                                                           MAX_NUMBER_OF_LOG_FILES,
                                                           (PCHAR) FILE_LOGGER_LOG_FILE_DIRECTORY_PATH,
                                                           TRUE)) != STATUS_SUCCESS) {
        kvs_err("File logging enable option failed with 0x%08x error code\n", retStatus);
    }
}

CHK_STATUS(createStreamCallbacks(&pStreamCallbacks));
CHK_STATUS(addStreamCallbacks(pClientCallbacks, pStreamCallbacks));
CHK_STATUS(createKinesisVideoClient(pDeviceInfo, pClientCallbacks, &clientHandle));
CHK_STATUS(createKinesisVideoStreamSync(clientHandle, pStreamInfo, &streamHandle));

g_data.streamStopTime = streamStopTime;
g_data.streamHandle = streamHandle;
g_data.clientHandle = clientHandle;
g_data.streamStartTime = defaultGetTime();
ATOMIC_STORE_BOOL(&g_data.firstVideoFramePut, FALSE);

gs_brun = true;
if(ffss_thread_create_join(&gs_kvsc_video_task, putVideoFrameRoutine, NULL) != FFSS_OK)
{
    kvs_err("ffss_thread_create_join putVideoFrameRoutine failed!!!");
}

if(ffss_thread_create_join(&gs_kvsc_audio_task, putAudioFrameRoutine, NULL) != FFSS_OK)
{
    kvs_err("ffss_thread_create_join putAudioFrameRoutine failed!!!");
}
return 0;

CleanUp:
G_ipcData->cloud.login_state = 0;
if (STATUS_FAILED(retStatus))
{
    G_ipcData->cloud.login_state = 2;
    kvs_err("Failed with status 0x%08x\n", retStatus);
}
if (LIKELY(pDeviceInfo != NULL))
{
    freeDeviceInfo(&pDeviceInfo);
}
if (LIKELY(pStreamInfo != NULL))
{
    freeStreamInfoProvider(&pStreamInfo);
}
if (IS_VALID_STREAM_HANDLE(streamHandle))
{
    freeKinesisVideoStream(&streamHandle);
}
if (IS_VALID_CLIENT_HANDLE(clientHandle))
{
    freeKinesisVideoClient(&clientHandle);
}
if (LIKELY(pClientCallbacks != NULL))
{
    freeCallbacksProvider(&pClientCallbacks);
}

return -1;
}

static int unload_kvsc(void)
{
gs_brun = false;
G_ipcData->cloud.login_state = 0;

ffss_thread_delete(&gs_kvsc_audio_task);
ffss_thread_delete(&gs_kvsc_video_task);

ffstream_close(&gs_kvsc_audio_stream);
ffstream_close(&gs_kvsc_video_stream);

return 0;

}

FFMOD_INFO(MOD_NAME_KVSC, MOD_PRI_KVSC, load_kvsc, unload_kvsc);

#ifdef __cplusplus
}
#endif

MushMal commented 3 years ago

I took a very quick look and, in general, I have to guess at a few of the things in your code.

A couple of general notes:

ffkvsc_read

Where is the buffer stored? pFrame->frameData = (PBYTE)FFSS_DATA_NEXT(vinfo); Also, does this set the right flags for audio? For audio we should not actually set any flags.

Regarding "// no audio can be put until first video frame is put": you can actually start putting non-key-frame frames and they will simply be skipped; however, it is a good idea to start audio frames only after video is being produced.

That said, I think you should debug your pipeline and ensure that the actual file is being created with the right flags for video and audio. Put some instrumentation in place (i.e., logging) and make sure you don't get errors while putting frames, and that you are getting frames from both pipelines in the first place.
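(As an illustration of that instrumentation, a small wrapper like the one below could be used in both threads. It is a sketch only; kvs_inf/kvs_wrn are the logging macros already used in the module above, and the Frame fields come from the producer headers.)

```c
// Sketch: wrap putKinesisVideoFrame with logging so the video and audio
// pipelines can be compared frame by frame.
static STATUS putFrameLogged(STREAM_HANDLE streamHandle, PFrame pFrame, const char* label)
{
    STATUS status;

    kvs_inf("%s frame: track=%llu size=%u flags=0x%x pts=%llu dts=%llu",
            label,
            (unsigned long long) pFrame->trackId,
            pFrame->size,
            pFrame->flags,
            (unsigned long long) pFrame->presentationTs,
            (unsigned long long) pFrame->decodingTs);

    status = putKinesisVideoFrame(streamHandle, pFrame);
    if (STATUS_FAILED(status)) {
        kvs_wrn("%s putKinesisVideoFrame failed with 0x%08x", label, status);
    }
    return status;
}
```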

Also, consider whether your media pipeline could push the frames in, instead of spinning up audio and video threads like the sample does. Most media pipelines have their own liveness model where they push frames when they are ready, rather than the end element polling for readiness. For example, Android MediaCodec originally did not have a push model and the app needed to query the status in a loop; later versions added a push model where you register a delegate.
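(A rough sketch of what such a push-model hookup could look like; ffkvsc_on_frame and its registration are hypothetical names standing in for whatever callback the camera pipeline offers, and g_data/kvs_wrn come from the code earlier in this thread.)

```c
// Hypothetical push-model glue: the media pipeline calls this whenever an
// encoded frame is ready, instead of the sample's polling threads.
static void ffkvsc_on_frame(PBYTE data, UINT32 size, UINT64 timestamp100ns, BOOL isVideo, BOOL isKeyFrame)
{
    static UINT32 frameIndex = 0;
    Frame frame;
    STATUS status;

    frame.version = FRAME_CURRENT_VERSION;
    frame.trackId = isVideo ? DEFAULT_VIDEO_TRACK_ID : DEFAULT_AUDIO_TRACK_ID;
    // only video key frames carry the key-frame flag; audio frames carry none
    frame.flags = (isVideo && isKeyFrame) ? FRAME_FLAG_KEY_FRAME : FRAME_FLAG_NONE;
    frame.duration = 0;
    frame.decodingTs = timestamp100ns;
    frame.presentationTs = timestamp100ns;
    frame.index = frameIndex++;
    frame.frameData = data;
    frame.size = size;

    status = putKinesisVideoFrame(g_data.streamHandle, &frame);
    if (STATUS_FAILED(status)) {
        kvs_wrn("putKinesisVideoFrame failed with 0x%08x", status);
    }
}
```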

Sorry that I couldn't give more answers, but I think you are on the right path. Try to dump out the data; it should give you clues. One other thing you could try is to go video-only initially: just create a single-track stream and only produce video, and make sure it has the right fragmentation, etc.
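(Concretely, the video-only test can reuse most of load_kvsc() above and only swap the stream info provider. A sketch modeled on KvsVideoOnlyStreamingSample.c, with the client handle created the same way as before:)

```c
#include <com/amazonaws/kinesis/video/cproducer/Include.h>

// Sketch of the video-only variant: one track, no audio CPD. The caller is
// assumed to have created clientHandle the same way load_kvsc() does above.
static STATUS create_video_only_stream(CLIENT_HANDLE clientHandle, PCHAR streamName, STREAM_HANDLE* pStreamHandle)
{
    STATUS retStatus = STATUS_SUCCESS;
    PStreamInfo pStreamInfo = NULL;

    CHK_STATUS(createRealtimeVideoStreamInfoProvider(streamName,
                                                     2 * HUNDREDS_OF_NANOS_IN_AN_HOUR,    // retention
                                                     120 * HUNDREDS_OF_NANOS_IN_A_SECOND, // buffer duration
                                                     &pStreamInfo));
    pStreamInfo->streamCaps.absoluteFragmentTimes = FALSE; // relative timestamps, as in the code above

    CHK_STATUS(createKinesisVideoStreamSync(clientHandle, pStreamInfo, pStreamHandle));

CleanUp:
    if (pStreamInfo != NULL) {
        freeStreamInfoProvider(&pStreamInfo);
    }
    return retStatus;
}
```

Then run only putVideoFrameRoutine() against the returned stream handle.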

yibin-lin commented 3 years ago

Hi MushMal, I changed my code according to your suggestions, including setting the audio flags to none. The result was still unsuccessful, so I ran the KvsAacAudioVideoStreamingSample.c example on the IP camera. The example ran successfully as expected, so I saved each AAC-encoded audio frame from the IP camera into files and used them to replace the audio files in the aacSampleFrames folder. At the same time, because my audio sampling rate is 8 kHz, I changed #define SAMPLE_AUDIO_FRAME_DURATION (128 * HUNDREDS_OF_NANOS_IN_A_MILLISECOND) and #define AUDIO_TRACK_SAMPLING_RATE 8000. But the demo with my files replaced has the same problem. Could there be a problem with my audio data or with the modified code? The audio I save from the IP camera as an .aac file can be played with a player.

If it is not too much trouble, could you provide an example that streams audio with a sampling rate of 8 kHz, encoded as AAC, PCM-u, or PCM-A? It would make our development easier. All of our IP cameras support audio encoding in these formats. Thank you very much.
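(Not an official sample, but for 8 kHz audio the CPD generation in load_kvsc() above only needs matching parameters. A sketch, assuming mono capture; mkvgenGenerateAacCpd is the call already used above, while mkvgenGeneratePcmCpd and the KVS_PCM_FORMAT_CODE_* constants come from the mkvgen headers and should be double-checked against your SDK version.)

```c
#include <com/amazonaws/kinesis/video/cproducer/Include.h>

// Sketch: codec private data for 8 kHz mono audio. The AAC call mirrors the
// one in load_kvsc(); the PCM call is for G.711 A-law/mu-law tracks.
static STATUS setup8kAudioCpd(PTrackInfo pAudioTrack, BOOL useAac, PBYTE cpdBuffer, UINT32 cpdBufferSize)
{
    STATUS retStatus = STATUS_SUCCESS;

    pAudioTrack->codecPrivateData = cpdBuffer;
    pAudioTrack->codecPrivateDataSize = cpdBufferSize;

    if (useAac) {
        // AAC-LC, 8000 Hz, 1 channel
        CHK_STATUS(mkvgenGenerateAacCpd(AAC_LC, 8000, 1,
                                        pAudioTrack->codecPrivateData, pAudioTrack->codecPrivateDataSize));
    } else {
        // G.711 A-law (use KVS_PCM_FORMAT_CODE_MULAW for PCM-u), 8000 Hz, 1 channel
        CHK_STATUS(mkvgenGeneratePcmCpd(KVS_PCM_FORMAT_CODE_ALAW, 8000, 1,
                                        pAudioTrack->codecPrivateData, pAudioTrack->codecPrivateDataSize));
    }

CleanUp:
    return retStatus;
}
```

For A-law/mu-law the MKV codec id is typically "A_MS/ACM" rather than "A_AAC", and the per-frame duration depends on your packetization interval instead of the fixed 1024-sample AAC frame; both points are worth confirming against the SDK headers.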

[three screenshots attached]

MushMal commented 3 years ago

I really can't tell what the screenshots say. Check the debug logs and see if they give you any clues. Please translate the screenshots so people can help.

MushMal commented 3 years ago

This issue is stale. Please provide further information or close this issue