noureldien / timeception

Timeception for Complex Action Recognition, CVPR 2019 (Oral Presentation)
https://noureldien.com/research/timeception/
GNU General Public License v3.0

Training test details #16

Closed pzhren closed 4 years ago

pzhren commented 4 years ago

Hi, I have a small doubt about your answer below. Are you re-sampling the dataset once per epoch? Can you explain the specific process in detail, and is there corresponding code? Thank you very much.

For the best results, in each epoch I sample new segments and extract their features using I3D. During testing, I average the scores of 10 random crops. Previous works such as Non-Local even test on 30 crops.

Originally posted by @noureldien in https://github.com/noureldien/timeception/issues/15#issuecomment-552368996
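As an illustration of the per-epoch re-sampling described above, a minimal sketch (not the repository's code; the function name, segment count, and segment length are assumptions) could look like this:

# Illustrative only: pick new random segment start frames for a video each epoch,
# so I3D extracts features from different frames every time the video is visited.
import numpy as np

def sample_segment_starts(n_video_frames, n_segments=64, segment_len=8, rng=np.random):
    """Return one random start index inside each of n_segments equal chunks of the video."""
    chunk = n_video_frames / float(n_segments)
    starts = []
    for i in range(n_segments):
        lo = int(i * chunk)
        hi = max(lo, int((i + 1) * chunk) - segment_len)
        starts.append(rng.randint(lo, hi + 1))
    return starts

# Called once per epoch and per video; the frames [s, s + segment_len) of each start s
# are then fed to I3D to extract one feature per segment.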

noureldien commented 4 years ago

Just to let everybody know how many of your e-mails I've already replied to. I don't know how long this will go on for.



Hi, can you tell me the sampling strategy you use to sample the videos in each epoch in order to get the best results? Looking forward to your reply. Best regards, Pengzhen Ren ------------------ Original ------------------ From: "Hussein, Nour"N.M.E.Hussein@uva.nl; Date: Sat, Oct 19, 2019 06:09 PM To: "Pengzhen Ren"pzhren@foxmail.com;

Subject: Re: Preprocessing of data set MultiTHUMOS: Code-CVPR2019:Timeception for Complex Action Recognition

Hi, the test results reported in the paper are the average scores of 10 test runs. Did you try this? I don't have a PyTorch implementation of mAP.
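If this means averaging the per-class scores of the runs before computing mAP (rather than averaging the final mAP values), a tiny sketch of that step would be:

# Illustrative: score-level averaging over several test runs (different random crops / sampled frames).
import numpy as np

def average_runs(run_scores):
    """run_scores: list of (n_test, n_classes) score arrays, one per test run."""
    return np.mean(np.stack(run_scores, axis=0), axis=0)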

---- Pengzhen Ren wrote ----

Hi, you said before that the mAP should be evaluated on the entire test dataset. After I modified my code accordingly, the best result I got with the I3D+3TC test on the Charades dataset was 31.8 (TC step=32), which is slightly lower than the 33.89 in the paper. I would like to ask: is there a PyTorch implementation for calculating mAP on the entire test dataset, so that I can check the accuracy of my code?


Best regards, Pengzhen Ren ------------------ Original ------------------ From: "Hussein, Nour"N.M.E.Hussein@uva.nl; Date: Sat, Sep 28, 2019 01:44 PM To: "Pengzhen Ren"pzhren@foxmail.com;"任鹏真"1006963297@qq.com;

Subject: Re: Preprocessing of data set MultiTHUMOS: Code-CVPR2019:Timeception for Complex Action Recognition

Aha, that is where the error comes from. Please only evaluate the mAP over the entire training/test split. Don't calculate mAP for the minibatch. Look at the Keras example.
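In other words, the predictions and labels are accumulated over the whole split first, and average precision is computed once at the end. A minimal sketch of that idea, assuming a PyTorch model and a test_loader, and using sklearn's average_precision_score as a stand-in for the repository's Keras metric:

# Sketch: accumulate scores and labels over the *entire* split, then compute mAP once.
import numpy as np
import torch
from sklearn.metrics import average_precision_score

def evaluate_map(model, test_loader, n_classes=157):
    all_scores, all_labels = [], []
    with torch.no_grad():
        for x_batch, y_batch in test_loader:   # y_batch: (batch, n_classes) multi-hot labels
            all_scores.append(torch.sigmoid(model(x_batch)).cpu().numpy())
            all_labels.append(y_batch.cpu().numpy())
    scores = np.concatenate(all_scores)         # (n_test, n_classes)
    labels = np.concatenate(all_labels)
    # Over the full split every Charades class has positives, so per-class AP is defined.
    aps = [average_precision_score(labels[:, c], scores[:, c]) for c in range(n_classes)]
    return float(np.mean(aps))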

---- 1006963297 wrote ----

Hi, for a video, I didn't use the label information when sampling frames; I used the labels of the entire video when the data was loaded. The labeling code is as follows: https://github.com/noureldien/timeception/blob/master/core/data_utils.py#L173

When calculating mAP, I use the entire training set, but I average the mAP computed for each batch, similar to your code: https://github.com/noureldien/timeception/blob/master/experiments/train_pytorch.py#L128

On September 28, 2019, at 11:53, "Hussein, Nour" N.M.E.Hussein@uva.nl wrote: Hi, for a video, when you sample frames, do you use only the labels of the sampled frames or the labels of the entire video?

When calculating mAP during training, do you use the entire training/test split, or only the samples of the batch?

Nour

---- Pengzhen Ren wrote ----

Hi Nour, https://github.com/wykang/Charades/blob/adc58b7cfe2567f17cc7b62caf4ff4a13a1e8f22/utils/map.py#L26 I am very sorry to interrupt you again. I have run into the same situation as other people who reproduce the timeception source code: after multiple iterations the mAP value is still nan. I have done the following analysis of this situation.

However, once a nan appears, np.mean returns nan, so the final m_ap cannot be obtained. In fact, in a small batch of the Charades dataset it is quite likely that some class has ground-truth labels that are all zero. How should this be dealt with? I also counted the number of positive labels for each of the 157 Charades classes:

A = []
for i in range(157):
    a = np.sum(data[1][:, i] == 1)
    A += [a]

A
Out[56]: [613, 606, 452, 291, 258, 79, 612, 27, 781, 857, 46, 710, 200, 42, 239, 1043, 663, 217, 291, 217, 661, 351, 275, 272, 86, 186, 613, 275, 239, 91, 252, 58, 479, 567, 273, 280, 149, 125, 224, 95, 328, 177, 196, 177, 145, 34, 100, 255, 98, 91, 82, 344, 338, 228, 172, 153, 154, 190, 72, 1255, 46, 1048, 472, 646, 46, 453, 29, 397, 151, 160, 438, 201, 318, 199, 104, 137, 378, 171, 158, 160, 132, 470, 224, 72, 131, 28, 59, 219, 189, 52, 87, 39, 302, 94, 110, 37, 442, 1417, 361, 100, 125, 25, 235, 36, 205, 188, 1080, 972, 269, 464, 526, 47, 409, 584, 212, 316, 161, 168, 742, 428, 357, 106, 159, 480, 165, 393, 298, 464, 249, 152, 203, 46, 339, 95, 257, 312, 35, 208, 55, 81, 31, 496, 189, 243, 160, 236, 245, 316, 301, 584, 355, 951, 1011, 633, 1326, 366, 927]
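For reference, one common workaround (not necessarily what this repository does; the fix suggested elsewhere in this thread is to evaluate over the entire split rather than per batch) is to skip the undefined per-class APs when averaging:

# Illustrative: per-batch mAP that skips classes with no positive sample,
# instead of letting a single nan poison np.mean.
import numpy as np
from sklearn.metrics import average_precision_score

def batch_map(labels, scores):
    """labels, scores: (batch_size, n_classes); AP is undefined for all-zero label columns."""
    aps = []
    for c in range(labels.shape[1]):
        if labels[:, c].sum() == 0:
            aps.append(np.nan)   # no positives for this class in the batch, AP undefined
        else:
            aps.append(average_precision_score(labels[:, c], scores[:, c]))
    return float(np.nanmean(aps))  # np.mean would return nan here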

Best regards, Pengzhen Ren ------------------ Original ------------------ From: "Hussein, Nour"N.M.E.Hussein@uva.nl; Date: Tue, Sep 24, 2019 06:09 PM To: "Pengzhen Ren"pzhren@foxmail.com;

Subject: Re: Preprocessing of data set MultiTHUMOS: Code-CVPR2019:Timeception for Complex Action Recognition

I used a server with 4 GPUs, of different models such as NVIDIA Titan X and NVIDIA 1080 Ti. From: Pengzhen Ren pzhren@foxmail.com Sent: 24 September 2019 11:58:38 To: Hussein, Nour Subject: Re: Preprocessing of data set MultiTHUMOS: Code-CVPR2019:Timeception for Complex Action Recognition

I'd like to ask which GPU model you use.

--- Original Message --- From: "Hussein, Nour"N.M.E.Hussein@uva.nl Sent: September 24, 2019, 15:06:29 To: "Pengzhen Ren"pzhren@foxmail.com; Subject: Re: Preprocessing of data set MultiTHUMOS: Code-CVPR2019:Timeception for Complex Action Recognition

Hi, I've never experienced nan using PyTorch or Keras.

Nour From: Pengzhen Ren pzhren@foxmail.com Sent: 24 September 2019 08:10:51 To: Hussein, Nour Subject: Re: Preprocessing of data set MultiTHUMOS: Code-CVPR2019:Timeception for Complex Action Recognition

Hi Nour, may I ask whether you have ever seen this situation?


Best regards, Pengzhen Ren ------------------ Original ------------------ From: "Pengzhen Ren"pzhren@foxmail.com; Date: Mon, Sep 23, 2019 10:42 PM To: "Hussein, Nour"N.M.E.Hussein@uva.nl;

Subject: Re: Preprocessing of data set MultiTHUMOS: Code-CVPR2019:Timeception for Complex Action Recognition

https://github.com/Jiankai-Sun/charades-baseline-pytorch/blob/d34178a216c5333fb193dc731c77ceaad54bb346/utils/map.py#L26 Is the mAP computed with this metric also nan?


Best regards, Pengzhen Ren ------------------ Original ------------------ From: "Hussein, Nour"N.M.E.Hussein@uva.nl; Date: Wed, Sep 11, 2019 04:12 PM To: "Pengzhen Ren"pzhren@foxmail.com;

Subject: Re: Preprocessing of data set MultiTHUMOS: Code-CVPR2019:Timeception for Complex Action Recognition

Hi Pengzhen,

I really don't know, I'm sorry. I used this evaluation function as it is. I got it from here: https://github.com/gsig/charades-algorithms/blob/5036aa3edf93ae653b8fb9504e5205bf3163bef5/pytorch/utils/map.py

Nour From: 1006963297 pzhren@foxmail.com Sent: 11 September 2019 09:59:55 To: Hussein, Nour Subject: Re: Preprocessing of data set MultiTHUMOS: Code-CVPR2019:Timeception for Complex Action Recognition

Hi Nour,

In the code below for computing mAP, the mean cannot be obtained due to the presence of 'nan' values. How do you deal with this? Thank you very much. Looking forward to your reply.

https://github.com/noureldien/timeception/blob/master/core/metrics.py#L47 https://github.com/noureldien/timeception/blob/master/core/metrics.py#L57

Best regards, Pengzhen Ren ------------------ Original Message ------------------ From: "Hussein, Nour"N.M.E.Hussein@uva.nl; Sent: Saturday, September 7, 2019, 11:46 AM To: "1006963297"pzhren@foxmail.com;

Subject: RE: Preprocessing of data set MultiTHUMOS: Code-CVPR2019:Timeception for Complex Action Recognition

For MultiTHUMOS, I used I3D pretrained on Kinetics.

---- 1006963297 wrote ----

Hi Nour, https://github.com/piergiaj/pytorch-i3d/blob/master/models/rgb_charades.pt

Is this ‘rgb_charades.pt’ used as the I3D feature-extraction weights for both the Charades and MultiTHUMOS datasets?

Best regards, Pengzhen Ren ------------------ Original Message ------------------ From: "Hussein, Nour"N.M.E.Hussein@uva.nl; Sent: Saturday, September 7, 2019, 11:06 AM To: "1006963297"pzhren@foxmail.com;

Subject: RE: Preprocessing of data set MultiTHUMOS: Code-CVPR2019:Timeception for Complex Action Recognition

Hi, I did not finetune I3D on MultiTHUMOS. I used the standard one.

Nour

---- 1006963297 wrote ----

Hi Nour,

I did not find the I3D weights pre-trained on the MultiTHUMOS dataset in your code. Can you give me the weights of this pre-trained model? Thank you very much. Looking forward to your reply.

Best regards,

Pengzhen Ren

------------------ Original Message ------------------

From: "1006963297"pzhren@foxmail.com;

Sent: Thursday, September 5, 2019, 5:26 PM

To: "Hussein, Nour"N.M.E.Hussein@uva.nl;

Subject: Preprocessing of data set MultiTHUMOS: Code-CVPR2019:Timeception for Complex Action Recognition

Hi,

Thank you very much for your help. Since I have only recently started working on action recognition, I don't know much about the preprocessing of video data. I looked at the official information for the MultiTHUMOS dataset, but I still don't know how to use THUMOS to obtain MultiTHUMOS, and I didn't find the relevant content in your code. Can you send me the preprocessing code for the MultiTHUMOS dataset? Thank you very much. I promise to acknowledge your help in future work.

Best regards,

Pengzhen Ren

------------------ Original Message ------------------

From: "1006963297"pzhren@foxmail.com;

Sent: Thursday, August 29, 2019, 12:58 PM

To: "Hussein, Nour"N.M.E.Hussein@uva.nl;

Subject: Re: Code-CVPR2019:Timeception for Complex Action Recognition

Hi,

Thank you very much for your help. I have solved all the problems I encountered. If someone else has the same problem, you can refer them to me and I will help with the answer. Thank you again for your help.

Best regards,

Pengzhen Ren

------------------ Original Message ------------------

From: "1006963297"pzhren@foxmail.com;

Sent: Wednesday, August 28, 2019, 8:46 PM

To: "Hussein, Nour"N.M.E.Hussein@uva.nl;

Subject: Re: Code-CVPR2019:Timeception for Complex Action Recognition

Hi,

I am very sorry to bother you, but I found the following error during testing.

I think the error occurs in the function highlighted in green. Does the test run normally on your side?

------------------ Original Message ------------------

From: "Hussein, Nour"N.M.E.Hussein@uva.nl;

Sent: Wednesday, August 28, 2019, 5:32 PM

To: "1006963297"pzhren@foxmail.com;

Subject: Re: Code-CVPR2019:Timeception for Complex Action Recognition

# Assumes a thread pool (multiprocessing.dummy.Pool), so the workers can fill
# self.__images in place; resize_crop is assumed to be a cropping/resizing helper
# from the same module (not shown in the email).
import random
from multiprocessing.dummy import Pool

import cv2
import numpy as np

class AsyncVideoReaderCharadesForI3DTorchModel():

    def __init__(self, n_threads=20):
        random.seed(101)
        np.random.seed(101)

        self.__is_busy = False
        self.__images = None
        self.__n_channels = 3
        self.__img_dim = 224

        self.__n_threads_in_pool = n_threads
        self.__pool = Pool(self.__n_threads_in_pool)

    def load_video_frames_in_batch(self, frames_pathes):
        self.__is_busy = True

        n_pathes = len(frames_pathes)
        idxces = np.arange(0, n_pathes)

        # parameters passed to the reading function
        params = [data_item for data_item in zip(idxces, frames_pathes)]

        # set list of images before start reading
        imgs_shape = (n_pathes, self.__img_dim, self.__img_dim, self.__n_channels)
        self.__images = np.zeros(imgs_shape, dtype=np.float32)

        # start pool of threads
        self.__pool.map_async(self.__preprocess_img_wrapper, params, callback=self.__thread_pool_callback)

    def get_images(self):
        if self.__is_busy:
            raise Exception('Sorry, you can\'t get images while threads are running!')
        else:
            return self.__images

    def is_busy(self):
        return self.__is_busy

    def __thread_pool_callback(self, args):
        self.__is_busy = False

    def __preprocess_img_wrapper(self, params):
        try:
            self.__preprocess_img(params)
        except Exception as exp:
            print('Error in __preprocess_img')
            print(exp)

    def __preprocess_img(self, params):
        idx = params[0]
        path = params[1]

        img = cv2.imread(path)
        img = resize_crop(img)
        img = img.astype(np.float32)

        # normalize such that values range from -1 to 1
        img /= float(127.5)
        img -= 1.0

        # convert from bgr to rgb
        img = img[:, :, (2, 1, 0)]

        self.__images[idx] = img

    def close(self):
        self.__pool.close()
        self.__pool.terminate()
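For context, a rough usage pattern for this reader, inferred only from the methods above and not taken from the repository, would be:

# Hypothetical usage, based only on the class shown above.
import time

frame_paths = ['/path/to/video/frame_0001.jpg', '/path/to/video/frame_0002.jpg']  # example inputs

reader = AsyncVideoReaderCharadesForI3DTorchModel(n_threads=20)
reader.load_video_frames_in_batch(frame_paths)
while reader.is_busy():
    time.sleep(0.1)              # wait for the thread pool to finish reading
frames = reader.get_images()     # (n_frames, 224, 224, 3), float32, values in [-1, 1]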

From: 1006963297 pzhren@foxmail.com Sent: 28 August 2019 11:22:52 To: Hussein, Nour Subject: Re: Code-CVPR2019:Timeception for Complex Action Recognition

Hi,

It seems that the code does not contain the function used at the line below.

https://github.com/noureldien/timeception/blob/master/datasets/charades.py#L756

video_reader_tr = image_utils.AsyncVideoReaderCharadesForI3DTorchModel(n_threads=n_threads)

------------------ Original Message ------------------

From: "Hussein, Nour"N.M.E.Hussein@uva.nl;

Sent: Wednesday, August 28, 2019, 4:57 PM

To: "1006963297"pzhren@foxmail.com;

Subject: Re: Code-CVPR2019:Timeception for Complex Action Recognition

Nour From: 1006963297 pzhren@foxmail.com Sent: 28 August 2019 10:51:35 To: Hussein, Nour Subject: Re: Code-CVPR2019:Timeception for Complex Action Recognition

Hi,

I did not find this function

https://github.com/noureldien/timeception/blob/master/datasets/charades.py#L756

Pengzhen Ren

------------------ Original Message ------------------

From: "Hussein, Nour"N.M.E.Hussein@uva.nl;

Sent: Wednesday, August 28, 2019, 2:38 PM

To: "1006963297"pzhren@foxmail.com;

Subject: Re: Code-CVPR2019:Timeception for Complex Action Recognition

Hi,

Can you check these?

https://github.com/noureldien/timeception/blob/master/datasets/charades.py#L311 https://github.com/noureldien/timeception/blob/master/datasets/charades.py#L719

Nour From: 1006963297 pzhren@foxmail.com Sent: 28 August 2019 08:32:55 To: Hussein, Nour Subject: Re: Code-CVPR2019:Timeception for Complex Action Recognition

Hello, author:

Thank you very much for your reply. Following your instructions, I have obtained the corresponding files and completed the frame sampling.

However, how can I use I3D (https://github.com/piergiaj/pytorch-i3d) to complete the feature extraction? I used pytorch-i3d\extract_features.py for feature extraction, but that extracts features for all video frames. Is there a corresponding file you have not uploaded? I want to quickly reproduce your work. I would be grateful if you could help.

Looking forward to your reply.


Best regards,

Pengzhen Ren

------------------ Original Message ------------------

From: "Hussein, Nour"N.M.E.Hussein@uva.nl;

Sent: Tuesday, August 6, 2019, 2:16 PM

To: "1006963297"pzhren@foxmail.com;

Subject: Re: Code-CVPR2019:Timeception for Complex Action Recognition

Hi Pengzhen,

Thanks for asking. The file 'WZA37.pkl' contains the I3D features of the video WZA37.mp4 from the Charades dataset.

I extracted features from the sampled frames, as discussed in the paper:

https://github.com/noureldien/timeception/blob/master/datasets/charades.py#L311

I used this I3D

https://github.com/piergiaj/pytorch-i3d
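For anyone reproducing this step, here is a rough, unofficial sketch of extracting I3D features for the sampled segments only. It uses the InceptionI3d class and a pretrained weight file from the pytorch-i3d repository linked above, but the segment length, array shapes, file names, and the use of the pooled feature vector are assumptions; the Timeception paper describes features with a spatial 7x7x1024 map from an intermediate I3D layer rather than a pooled vector.

# Illustrative sketch only, not the repository's actual extraction code.
import pickle

import numpy as np
import torch
from pytorch_i3d import InceptionI3d  # from https://github.com/piergiaj/pytorch-i3d

i3d = InceptionI3d(400, in_channels=3)
i3d.load_state_dict(torch.load('models/rgb_imagenet.pt'))  # Kinetics-pretrained RGB weights
i3d.eval()

def extract_segment_features(segments):
    """segments: float32 array (n_segments, 16, 224, 224, 3), values already in [-1, 1]."""
    feats = []
    with torch.no_grad():
        for seg in segments:
            x = torch.from_numpy(seg).permute(3, 0, 1, 2).unsqueeze(0)  # (1, 3, 16, 224, 224)
            feats.append(i3d.extract_features(x).squeeze().cpu().numpy())
    return np.stack(feats)

# Example: one feature file per video, e.g. 'WZA37.pkl' for WZA37.mp4.
# feats = extract_segment_features(segments_of_WZA37)
# with open('WZA37.pkl', 'wb') as f:
#     pickle.dump(feats, f)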

Nour From: 1006963297 pzhren@foxmail.com Sent: 05 August 2019 16:56:13 To: Hussein, Nour Subject: Code-CVPR2019:Timeception for Complex Action Recognition

Hello, author:

I am a graduate student at Northwestern University, Xi'an, China. I have recently become very interested in your Timeception for Complex Action Recognition paper. But when I trained the timeception-only structure in the PyTorch environment, I got the following error:

There is really no such file in the dataset folder. Could you please give me an answer? I would be very grateful if you could help.

Looking forward to hearing from you.

Best regards, Pengzhen Ren


pzhren commented 4 years ago

I am very sorry for the interruptions.

pzhren commented 4 years ago

Thank you very much for all your explanations of my long-standing questions. I feel very guilty and sorry for the interruptions.