Hi, the interpreter vm version is chosen at build time. To be clearer: if you load a plugin built for Python 3.5 you cannot use it with Python 2.7. We are about to release updated binaries with Python 2.7 and Python 3.6 (in addition to the 'blessed' 3.5); otherwise you need to build the plugin from sources with the version you want. Regarding modules, you can install them in the Scripts directory or (for example if you have installed a binary version) directly in the lib/ folder of the UnrealEnginePython plugin.
By the way, remember that technically you can install them wherever you want and simply add the directory of your choice to Python's sys.path.
As an example, if you want to install modules in C:/Python35/testlibs, then in your ue_site.py just add:

```python
import sys
sys.path.insert(0, 'C:/Python35/testlibs')
```
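If it helps, here is a slightly fuller ue_site.py sketch (using the same hypothetical C:/Python35/testlibs directory, with numpy only as a placeholder module) that also verifies the import actually works:

```python
# ue_site.py -- executed by UnrealEnginePython at startup.
# 'C:/Python35/testlibs' is just an example directory; use whatever folder
# you copied your modules into (it must match the plugin's Python version).
import sys
import unreal_engine as ue

sys.path.insert(0, 'C:/Python35/testlibs')

try:
    import numpy  # any module you installed in that folder
    ue.log('numpy loaded from: {0}'.format(numpy.__file__))
except ImportError as e:
    ue.log('module not found, check the path: {0}'.format(e))
```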
Ok many thanks that was very helpful
Hi, I have some questions.
Thanks in advance.
Regarding the second question, I could figure it out myself, thanks.
Only the first question remains, please.
For the 'scene frame', do you mean getting a 'screenshot' of the scene as a bitmap in Python?
Yes exactly
You need the latest sources (the build procedure is now really easy, check the README):
https://github.com/20tab/UnrealEnginePython/blob/master/docs/Viewport_API.md
an example script taking a screenshot from PIE is here:
https://github.com/20tab/UnrealEnginePython/blob/master/examples/pie_screenshotter.py
While PIE is running you can call it; the pixels of the viewport will be read and written to a PNG file saved in the Desktop folder.
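In essence the linked script does something like this (a rough sketch using just the calls mentioned above, not a copy of the example):

```python
import os
import unreal_engine as ue
import png  # pure-Python PNG encoder (pip-installable)

# grab the size and the raw pixels of the running PIE viewport
width, height = ue.editor_get_pie_viewport_size()
pixels = ue.editor_get_pie_viewport_screenshot()

# rearrange the flat list of FColor structs into rows of RGBA values
rows = []
for y in range(height):
    row = []
    for x in range(width):
        p = pixels[y * width + x]
        row.extend([p.r, p.g, p.b, p.a])
    rows.append(row)

# write the bitmap to the user's Desktop, like the linked example does
path = os.path.expanduser('~/Desktop/pie_screenshot.png')
png.from_array(rows, 'RGBA').save(path)
```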
Thanks, that is exactly what I wanted. However, it is quite slow.
Do you think it would be faster if I implemented it in C++?
You can massively improve it using numpy and avoiding the PNG encoding. What is your final purpose? Maybe there are better approaches.
I want to use deep Q learning https://cs.stanford.edu/people/karpathy/convnetjs/demo/rldemo.html to make an AI player. It requires a real-time screenshot which will then be processed by a deep neural network. Below is my current code:
```python
import unreal_engine as ue
import matplotlib.pyplot as plt
import numpy as np
import pyscreenshot as ImageGrab
import cv2
import os
import png

width, height = ue.editor_get_pie_viewport_size()
pixels = ue.editor_get_pie_viewport_screenshot()
ue.log("{0} {1} {2}".format(width, height, len(pixels)))


class Hero:

    global counter
    counter = 1
    global counter2
    counter2 = 1
    #global cap
    #cap = cv2.VideoCapture(0)

    # this is called on game start
    def begin_play(self):
        ue.log('Begin Play on Hero class')

    # this is called at every 'tick'
    def tick(self, delta_time):
        global counter
        global counter2
        self.pawn = self.uobject.get_owner()
        #components = self.pawn.actor_components()
        #components = self.uobject.actor_components()
        #components = self.pawn.get_class()
        #components = self.pawn.actor_has_component_of_typ('unreal_engine.USkeletalMesh')
        #skeletal = self.uobject.get_component_by_type('SkeletalMeshComponent')
        #scenecapt = self.uobject.get_component_by_type('SceneCaptureComponent2D')
        counter = counter + 1
        counter2 = counter2 + 1
        #animation = scenecapt.get_anim_instance()
        if counter == 10:
            # pixels = ue.editor_get_pie_viewport_screenshot()
            pixels = ue.editor_get_active_viewport_screenshot()
            png_pixels = []
            for y in range(0, height):
                line = []
                for x in range(0, width):
                    index = y * width + x
                    pixel = pixels[index]
                    line.append([pixel.r, pixel.g, pixel.b, pixel.a])
                png_pixels.append(line)
            #path = os.path.expanduser("~/Users/ahmadhajmosa/Documents/Unreal Projects/DeepMind/UnrealGames/RaceCar1/Plugins/ah3.png")
            #png.from_array(png_pixels, 'RGBA').save(path)
            #vis = np.zeros((height, width), np.float32)
            img = np.asarray(png_pixels)
            #cv2.imwrite('color_img2.jpg', img)
            cv2.imshow("image", img)
            ue.log('image size value: {0}'.format(img.shape))
            counter = 1
        self.pawn.bind_axis('MoveForward', self.move_forward)
```
I have tried commenting out the PNG encoding and it is much faster, so if editor_get_active_viewport_screenshot() returned a list directly instead of a tuple, then I could use img = np.asarray(pixels) directly.
Which is the best format for np.asarray? A list of bytes? (but that would break HDR) A list of floats, or of ints?
I was thinking about adding an optional argument to the screenshot functions for returning raw data instead of tuples of FColor.
A list of ints would be fine. I have tried replacing the tuple-appending loops with direct assignment to a numpy array, as shown below. It is about two times faster but still makes the game hang, so the main cause of the delay is the two loops. If we could get an integer numpy array directly, that would be great.
Thanks
```python
png_pixels = np.zeros((height, width, 4))
for y in range(0, height):
    for x in range(0, width):
        index = y * width + x
        pixel = pixels[index]
        png_pixels[y, x, 0] = pixel.r
        png_pixels[y, x, 1] = pixel.g
        png_pixels[y, x, 2] = pixel.b
        png_pixels[y, x, 3] = pixel.a

cv2.imwrite('color_img6.jpg', png_pixels)
```
Check the latest commit: you can now pass a boolean to the screenshot functions to force an int tuple as the return value.
Thank you for your cooperation.
In the following link are the first results of the self-driving car.
This is really cool. Would you be willing to share what you did on a blog post or repository somewhere @ahmadadiga ?
Thanks,
Yes sure I will share it very soon.
Hello guys,
I've been trying to accomplish a similar idea, taking a few screenshots per second, but the best I could do so far still takes about 500 ms on my machine, with this code:
```python
import unreal_engine as ue
import os, time, cv2
import numpy as np

path = os.path.expanduser("~/Desktop/pie_screenshot.jpg")

start = time.time()
width, height = ue.editor_get_pie_viewport_size()
pixels = ue.editor_get_active_viewport_screenshot(True)
png_pixels = np.asarray(pixels).reshape(height, width, 4)
cv2.imwrite(path, png_pixels)
end = time.time()

ue.log("Took {0} to generate the screenshot ({1}x{2})".format(end - start, width, height))
# >> LogPython: Took 0.48443007469177246 to generate the screenshot(1311x686)
```
@ahmadadiga could you share some of your findings? What kind of performance are you getting with your implementation, can you do more than 2-3 frames per second?
Hello
What I have done is down-sampling the image (to 80x80x3) using OpenCV in C++ and then sending it to Python.
This way I could process more than 14 frames per second.
Here is the part that I have changed in the Python plugin:
```cpp
PyObject *py_unreal_engine_editor_get_pie_viewport_screenshot(PyObject * self, PyObject * args) {

	FViewport *viewport = GEditor->GetPIEViewport();
	if (!viewport) {
		Py_INCREF(Py_None);
		return Py_None;
	}

	/*PyObject *py_bool = nullptr;
	bool as_int_list = false;
	if (!PyArg_ParseTuple(args, "|O:editor_get_pie_viewport_screenshot", &py_bool)) {
		return NULL;
	}
	if (py_bool && PyObject_IsTrue(py_bool))
		as_int_list = true;*/

	TArray<FColor> sceneData;
	bool success = GetViewportScreenShot(viewport, sceneData);
	if (success) {
		FIntPoint SceneSize = viewport->GetSizeXY();

		cv::Mat M(SceneSize.Y, SceneSize.X, CV_8UC(3), cv::Scalar(0, 0, 255));
		cv::Mat *Sceneframe;
		cv::Mat *SceneScaledframe;
		Sceneframe = new cv::Mat();
		SceneScaledframe = new cv::Mat();
		cv::Mat M2(80, 80, CV_8UC(3), cv::Scalar(0, 0, 255));
		M.copyTo(*Sceneframe);
		M2.copyTo(*SceneScaledframe);

		// copy the BGR channels of the full-size screenshot into the cv::Mat
		for (int y = 0; y < SceneSize.Y; y++)
		{
			for (int x = 0; x < SceneSize.X; x++)
			{
				int i = x + (y * SceneSize.X);
				Sceneframe->data[i * 3 + 0] = sceneData[i].B;
				Sceneframe->data[i * 3 + 1] = sceneData[i].G;
				Sceneframe->data[i * 3 + 2] = sceneData[i].R;
			}
		}

		//cv::cvtColor(*Sceneframe, src_gray, cv::COLOR_BGR2GRAY);
		// down-sample to 80x80 (the size of SceneScaledframe)
		cv::resize(*Sceneframe, *SceneScaledframe, SceneScaledframe->size(), 0, 0, cv::INTER_LINEAR);

		// 80x80 buffer of FColor holding the scaled frame
		TArray<FColor> bitmap;
		bitmap.SetNum(80 * 80);

		try
		{
			for (int y = 0; y < 80; y++)
			{
				for (int x = 0; x < 80; x++)
				{
					int i = x + (y * 80);
					bitmap[i].B = SceneScaledframe->data[i * 3 + 0];
					bitmap[i].G = SceneScaledframe->data[i * 3 + 1];
					bitmap[i].R = SceneScaledframe->data[i * 3 + 2];
				}
			}
		}
		catch (int e)
		{
		}

		//PyList_New(Py_ssize_t size)
		//PyTuple_New(<#Py_ssize_t size#>)
		// return a flat tuple of 80*80*3 ints (R, G, B per pixel)
		PyObject *bitmap_tuple = PyTuple_New(bitmap.Num() * 3);
		for (int i = 0; i < bitmap.Num(); i++) {
			PyTuple_SetItem(bitmap_tuple, i * 3, PyLong_FromLong(bitmap[i].R));
			PyTuple_SetItem(bitmap_tuple, i * 3 + 1, PyLong_FromLong(bitmap[i].G));
			PyTuple_SetItem(bitmap_tuple, i * 3 + 2, PyLong_FromLong(bitmap[i].B));
			//PyTuple_SetItem(bitmap_tuple, i * 4 + 3, PyLong_FromLong(bitmap[i].A));
		}

		//FMemory::Free(Sceneframe);
		//FMemory::Free(SceneScaledframe);
		Sceneframe->release();
		SceneScaledframe->release();
		M.release();
		M2.release();
		//bitmap.
		//sceneData.release();
		return bitmap_tuple;
	}
	else {
		Py_INCREF(Py_None);
		return Py_None;
	}

	//PyObject *bitmap_tuple = PyTuple_New(bitmap.Num());
	//for (int i = 0; i < bitmap.Num(); i++) {
	//	PyTuple_SetItem(bitmap_tuple, i, py_ue_new_fcolor(bitmap[i]));
	//}
	//return bitmap_tuple;
}
```
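On the Python side, if you adopt the same 80x80x3 layout, the flat int tuple can be turned into an image without per-pixel Python loops (a sketch assuming the patched binding above):

```python
import unreal_engine as ue
import numpy as np

# the patched binding returns a flat tuple of 80*80*3 ints (R, G, B per pixel)
pixels = ue.editor_get_pie_viewport_screenshot()

# reshape into an 80x80 RGB image in one call, no per-pixel loops
frame = np.asarray(pixels, dtype=np.uint8).reshape(80, 80, 3)

# 'frame' can now be fed to the network, or shown with cv2.imshow('frame', frame)
```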
@ahmadadiga could you share how you were able to control the vehicle from Python? I've been trying to get that working recently. Thanks!
Hey, you need to follow the instructions in the example ("Adding a python component to an Actor").
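For reference, a rough sketch of such a component, using only the calls that already appear in this thread (the 'MoveForward' axis name is an assumption and must match an axis mapping in your project's input settings):

```python
import unreal_engine as ue

class VehicleDriver:

    # called once when the game starts
    def begin_play(self):
        self.pawn = self.uobject.get_owner()
        # hook our callback to the project's 'MoveForward' axis mapping
        self.pawn.bind_axis('MoveForward', self.move_forward)

    # receives the current axis value every frame
    def move_forward(self, axis_value):
        ue.log('MoveForward axis value: {0}'.format(axis_value))
        # apply throttle/steering here, e.g. from the output of your network
```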
Hi, I have a problem: I can't import numpy.

```
LogPython:Error: DLL load failed:
LogPython:Error: Traceback (most recent call last):
LogPython:Error:   File "H:/AutomaticSystem/Content/Scripts\PythonTest.py", line 10, in <module>
```

Any idea?
Hi, check that your numpy is a 64-bit build.
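A quick way to check, from the plugin's Python console, whether the embedded interpreter itself is 64-bit (a 32-bit interpreter cannot load a 64-bit numpy and vice versa):

```python
import struct
import sys

# pointer size: 8 bytes means a 64-bit interpreter, 4 bytes means 32-bit
print(sys.version)
print('interpreter is {0}-bit'.format(struct.calcsize('P') * 8))
```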
Hey,
How can I import Python libraries like TensorFlow into the game?
Is there a way to change the Python interpreter?
Regards