foreshadow / atom-python-run

A simple atom package. Press one key to run your python code in atom.
https://atom.io/packages/atom-python-run
MIT License

Proposal for Cross-Platform Support #31

Closed ghost closed 7 years ago

ghost commented 7 years ago

Personal Comment: I've been looking at the source code for a while and, after breaking it down to its core, I see no reason why this package shouldn't be cross-platform compatible. The use of cp.exe is redundant and unnecessary.

Rationale: In the atom-python-run.js source file, you use two key modules:

const path = require("path");
const child_process = require("child_process");

The quick-and-dirty solution would be to use the built-in process object, which can determine the type of OS like so:

process.env.PATH[0]
*nix output: `/`
Windows output: `x`

This outputs either a / for a *nix-based environment or an x, where x is a drive letter corresponding to a mounted file system on Windows.
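As a concrete sketch of that check (`looksLikeWindowsPath` is a hypothetical helper, and `process.platform`, shown later in this thread, is the more robust approach):

```javascript
// Quick-and-dirty OS sniffing from the PATH variable, as described above.
// A sketch only; process.platform is the robust way to do this.
function looksLikeWindowsPath(pathVar) {
    // Windows PATH entries start with a drive letter ("C:\..."),
    // *nix entries start with "/".
    return /^[A-Za-z]:/.test(pathVar);
}

console.log(looksLikeWindowsPath(process.env.PATH || "") ? "windows" : "*nix");
```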

cp just adds another layer on top of the child_process.spawn() method. They behave essentially the same, since both execute a command and create a process.

Considerations about the cp program: You bypass python by chaining the execution process from Atom to node to cp to python. The chain should run from Atom to node to python directly, regardless of the OS in use.

The cp executable adds complexity to your simple application and locks it into one environment without any further consideration of the implications that raises.

Using cp creates a security hole that allows ANY command to be executed in the environment, because there is no sanitization, as the child_process documentation repeatedly warns. cp takes ANY set of arguments, concatenates them, and passes them directly to the system() function without further consideration.

At this point, so the end user never has to worry, you really only require that python and idle be available in the PATH variable, which exists on every platform. Syntactically speaking, the only differences are in how the tokens are parsed and delimited:

Windows: c:\some\path\to\some\binary.exe; c:\some\other\binary.exe; ...etc
*nix: /some/path/to/some/binaries: /some/other/binaries: ...etc

Proposal: There is no real need or requirement for the cp.cpp file (which is pure C and could just as well be a *.c source file). The best solution is to drop it. It creates a complicated set of instructions for the run(command) function, which is what actually spawns the python process in the first place. Dropping it also solves the cross-compilation issues that arise from using cp as a mechanism in your tool chain.

You can use the fact that node.js is built in to Atom to your advantage and create conditional statements for each environment.

process.platform
Windows Output: win32
Mac OS X Output: darwin
Linux output: linux

From there, you can use JavaScript's branching mechanisms to decide how you want to spawn a process. For example, if I want to spawn a python file on Linux, all I have to do is this:

var child = child_process.spawn(ca[0], [
    ca[1]
], {
    cwd: info.dir,
    detached: true
});
child.unref();

This bypasses cp completely and succeeds in executing the child process while unreferencing it so that it is no longer attached to its parent process, Atom. This also allows you to create custom events for specific environments.

Solution: To add cross-platform compatibility, all you have to do is delete the bin and src directories. In the lib directory, edit the atom-python-run.js source file so that it reflects the changes below.

First, change the f5_command and f6_command properties to execute python. There is no need to add the *.exe file extension; the OS will do the right thing when it has to.

// OS doesn't need the file extension for python executable
f5_command: {
    title: "Command of F5",
    description: "{file} stands for current file path",
    type: "string",
    default: "python \"{file}\""
},
f6_command: {
    title: "Command of F6",
    description: "{file} stands for current file path",
    type: "string",
    default: "python \"{file}\""
},

Modify the function run(command) so that it has these statements at the end of the block instead.

var child;

// if Windows OS
if ("win32" === process.platform) {
    child = child_process.spawn("cmd", [
        "/c", "start",  ca[0] + ca[1]
    ], {
        cwd: info.dir,
        detached: true
    });
}
// if Mac OS X or Linux
if ("darwin" === process.platform || "linux" === process.platform) {
    child = child_process.spawn(ca[0], [
        ca[1]
    ], {
        cwd: info.dir,
        detached: true
    });
}

child.unref();

It's simple and it turns out it wasn't all that bad after all.

Any bugs caused by these branches can be reported and easily tweaked so that they behave as expected.

This also creates explicit calls, so that only the desired file is executed within the environment. The worry of unsanitized execution disappears because cp is no longer necessary, which removes any cross-compilation concerns as well.

Additional Comments: Seeing as the execution time is calculated within the cp executable using the DOS pause command, you can simply tell the user to append the built-in input() call at the end of the python source file so that the shell stays open. If the end user is interested in timing the execution, they can do so with a built-in method provided by the python standard library.

Python 2.x.x
    raw_input("press any key to continue...")
Python 3.x.x
    input("press any key to continue...")

This will allow the shell to stay open until the user decides to close it.

foreshadow commented 7 years ago

Nice article. The issue of cross-platform support has been coming up for a long time. However, I don't have platforms other than Windows, so supporting them will be difficult for me. Your proposal solves a problem that troubled me for a long time. If it is convenient for you, you can make a pull request.

By the way, in consideration of my personal habits (and some users'), I would like to keep cp.exe; we can make it an option for Windows users.

And since I don't have platforms other than Windows, it would be nice if you could keep watching the issues and solve the problems on these platforms as well.

ghost commented 7 years ago

If it is convenient for you, you can make a pull request.

I'll see what I can do. I have access to Windows and Linux. I'll see if I can get my hands on a Mac (it's been a while).

Before I make a pull request, I need to do some further testing, since Linux has a tendency to create sub-shells (a shell within a shell). I've been trying to work around this one.

Fortunately, Mac OS X and Linux both use bash as their default shell. I saw a forked version of this module made to run explicitly on Linux. I could check it out in more detail to see how the shell is executed.

By the way, in consideration to my personal custom (and some of the users'), I would like to keep the cp.exe, and we can make it as an option for Windows users.

I don't see why this would be an issue. Maybe add a const enum type and an array (const char *) of allowed keywords. That would act as a filter so the executable explicitly allows python to execute instead of any application.

I have some code I can copy and paste from a past pet project to get the job done. It's also been reviewed and works fairly well. That would fill the security hole and allow you (and others) to keep the cp.exe and safely execute it.

And since I don't have platforms other than Windows, it would be nice if you can keep watching the issues and solve the problems of these platforms as well.

I can do my best seeing as I can't regularly keep an eye on it. I can check in from time to time to see how it's doing and fix any issues that might crop up.

ghost commented 7 years ago

Update: Just wanted to say I have some test modules almost ready to go. It's looking good, just working on a few kinks and ironing out the code. Should be ready to make a pull request soon.

rafpyprog commented 7 years ago

I made a fork to run Julia and managed to make it work on both Windows and Linux. Take a look: https://github.com/rafpyprog/atom-julia-run

ghost commented 7 years ago

Update: I've gotten it to work on Win32, GNU/Linux, and Mac OS X. I refactored a lot of the original code. So far, with the current branch I've been working on, Win32 seems to have a mojibake issue. I'm about ready for the pull request. If I can't figure it out by tomorrow night, I'm going to make the pull request regardless and see if someone else can figure it out.

I created a terminal.js module which handles the terminal request according to the OS. This will allow the user to customize the terminal, the terminal options, and the command to execute within the spawned terminal. So far, customization is only available for Win32 and GNU/Linux.

There is a built-in boolean-based logging mechanism that can be toggled on and off in the atom-python-run.js module. This is useful for debugging, even though it affects execution speed.

The cp executable now only executes python programs. The python version can be specified, and as long as it is in the PATH, it will work, e.g. python2, python2.7, python3, or just python.

The cp executable also has a built-in logging mechanism; this one cannot be toggled on or off. Assuming the executable doesn't run into a serious error, it logs the results of what was executed to cp.log in the bin directory. This is also useful for debugging, assuming the log was written to disk. To avoid excessive growth of the log file, a new log is created at each run.

ghost commented 7 years ago

Update: I'm ready for a pull request. Most of the bugs known to me are fixed, and it works on Win32, Linux, and Darwin-based systems. A DLL is no longer required for Win32 either. I added a binary for each OS to keep cp.exe in play. Not sure why, but I keep getting the response below. Let me know so I can push the results to the repo:

$ git push origin cross-platform
remote: Permission to foreshadow/atom-python-run.git denied to xovertheyearsx.
fatal: unable to access 'https://github.com/foreshadow/atom-python-run.git/': The requested URL returned error: 403

Comments: Also, I was wondering if you'd be open to the idea of letting any script-based language run? We could add a field in settings and let the user change it from python (the default) to any other, for example node, bash, or something else.

We could also add a few settings to allow the user to change the shell, its options, and the commands/arguments that follow it.

Something along the lines of:

(screenshots: Settings, Default Settings Example, Customized Settings Example)

foreshadow commented 7 years ago

You are pushing it directly to MY repo. In that case, you need to be a collaborator on this repo, which I hadn't set up then. You can fork a copy as YOUR repo, commit and push, then make a pull request (click the button) on your repo's home page on github.com. Anyway, you are a collaborator now.

foreshadow commented 7 years ago

There are some comments of mine in #40.

I am just letting you know that I am now working on these problems. If you have some good ideas, let me know.

  1. The main reason I used cmd /c start is that child_process.spawn cannot spawn cp with a terminal window. I found there is now a shell option in child_process.spawn, and I am trying it. Afterwards, file redirecting will hopefully become possible.
  2. The command and args cannot be parsed correctly when there is a space inside double quotes. Simply using string.split(" ") is not a good idea; I wonder how I can solve this. Also, it is reported in #38 that it fails with a & character as well.
  3. I plan to use a data-grammar filter in the keymap. I don't know how to make it convenient for users to customize.
  4. I am adjusting my code to make it more node.js-like and modern ES.
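For item 2 — splitting on spaces while respecting double quotes — a quote-aware tokenizer is one possibility. This is only a sketch (`tokenize` is a hypothetical helper, not code from the package) and it ignores escaped quotes:

```javascript
// Split a command string on whitespace, but keep double-quoted
// substrings (which may contain spaces) together as single tokens.
function tokenize(command) {
    const tokens = [];
    const re = /"([^"]*)"|(\S+)/g;
    let m;
    while ((m = re.exec(command)) !== null) {
        tokens.push(m[1] !== undefined ? m[1] : m[2]);
    }
    return tokens;
}
```

For example, `tokenize('python "C:\\My Files\\test.py"')` keeps the quoted path as one token, whereas `split(" ")` would break it apart.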

foreshadow commented 7 years ago

And I failed to check out from the cross-platform branch to master; it reported "file name too long".

error: cannot stat 'bin/cp-darwin13.4.0-x86_64.dSYM/cp-darwin13.4.0-x86_64.dSYM/cp-darwin13.4.0-x86_64.dSYM/cp-darwin13.4.0-x86_64.dSYM/cp-darwin13.4.0-x86_64.dSYM/cp-darwin13.4.0-x86_64.dSYM/Contents/Resources/DWARF/cp-darwin13.4.0-x86_64': Filename too long

ghost commented 7 years ago

cp is not needed... really. You don't actually need cp; you can just use the spawn method on its own. I tested it on Windows, Mac, and Linux. It worked every time.

All you have to do is ask for input to pause the screen through JS instead of C. It's basically the same thing, just ported over.

The idea really relies on SoC, or Separation of Concerns. Having the cp program violates that premise completely by creating a polyglot code base:

one module to determine the OS, another to create the window, another to execute the content, another to freeze (or even unfreeze) the content displayed within the opened instance.

Most of this was taken care of in the terminal.js file when I made my pull request.

Piping: The piping issue has to do with how the command is issued. That's actually how I created the log for the terminal.js file: I just piped stderr and stdout to stdout.

In your case, you need to redirect stdout to a file instead. You could also just create a file and write the output to it, although it would probably be easier to use spawn.

TBH, I haven't tried it and I'm not sure how I would go about it for sure.

Escaping and its deceptive nature: The special symbols should be escaped so that they are interpreted as literals. So, instead of passing "&" as-is, escape it: "\&".

Since python-run, terminal, and cp all parse the command, they each do so in their own unique manner. My guess is that it's fine up until cp executes the string, at which point it should have been escaped.
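One hedged way to do that escaping in the front end (`escapeShellChars` is an illustrative helper, not code from the package):

```javascript
// Prefix common shell metacharacters with a backslash so the
// receiving parser treats them as literals, e.g. "&" becomes "\&".
function escapeShellChars(arg) {
    return arg.replace(/([&|<>^])/g, '\\$1');
}
```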

This is why SoC helps: you can narrow down where it's happening much more easily. Here, you have to search through each of the core source files, which creates the complexity.

Also, cp has to do its own parsing to verify what it's executing. That adds a complex C/C++-based parsing layer just to guarantee the command executes the way you intended. This is why I suggested dumping cp in the first place.

You're violating the golden rule of programming, the DRY principle: Don't Repeat Yourself. You're rewriting the same code in two different languages.

C/C++ offer a much finer grain of control but far less built-in package support than javascript. At least with JS, you can use node and npm (the node package manager).

Arbitrary Execution: There are actually no other hotkey-based plugins, which is why this one has grabbed so much attention; without it, you can't just hit F5 and run your code. It is a handy feature, and you already wrote the base, so it naturally makes sense to build off of it.

All you really have to do is not demand that the python command be issued, and skip checking for file extensions. The original cp binary ignored all these issues anyway and would just execute any arbitrary statement you gave it.

For example, I could have a python file that is missing the .py extension so that I can execute it via the CLI (command-line interface) or by running it directly (double-clicking it so that python executes it). We usually do this by adding a shebang line at the beginning of the file.

#!/path/to/python/binary

import some_module

def do_some_stuff():
    pass  # more code here

class Object():
    pass  # class definition, method definitions, more code...

Then I would name my file some_file instead of some_file.py so I could add it to my PATH and execute it anywhere, at any time.

Checking for the .py extension explicitly prohibits this, even though the OS knows it's a legal action.

The command should just be defined in the settings menu and passed along with any arguments given to it, regardless of the binary or its intention. This lets the user define what is executed without complication. As long as it's in the user's PATH, it will execute.

Handling long filenames

And I failed to checkout from cross-platform to master branch, it reported file name too long.

error: cannot stat 'bin/cp-darwin13.4.0-x86_64.dSYM/cp-darwin13.4.0-x86_64.dSYM/cp-darwin13.4.0-x86_64.dSYM/cp-darwin13.4.0-x86_64.dSYM/cp-darwin13.4.0-x86_64.dSYM/cp-darwin13.4.0-x86_64.dSYM/Contents/Resources/DWARF/cp-darwin13.4.0-x86_64': Filename too long

That has to do with the way Win32 handles path and file names. There is a character limit and a set of restricted characters on Win32 systems, whereas *nix-based systems don't really care; you can use characters like - or * in a file name if you want to (not that you should).

If you have Windows 10, you can enable WSL (Windows Subsystem for Linux) so that you have a SoC (separation of concerns) and don't mix *nix commands with Win32 commands.

For previous versions, you can install MinGW (Minimalist GNU for Windows), add it to the PATH environment variable, and use the rm -rv [directory-or-filename-here] command to force-delete the directory.

Be careful when executing rm! You can delete everything and do serious damage. Make sure you cd into the working directory first and specify the target as a relative path.

rm -rfv ./cp-darwin13.4.0-x86_64.dSYM

We can always do a live coding session: If you'd like, and are able to, we could do a few pair-programming sessions to work out the kinks and test the issues as we write the code. Cloud9 is a live file-sharing IDE web application.

We can chat, work on the code base, connect the repository via GitHub, and go from there. The only caveat is that you need a debit or credit card to verify your identity.

I already have an account and it's free. I don't make any payments or anything like that. I'm still learning how to use it, but if you'd like to make some time, we can schedule a time that works for both of us.

If you know of a similar web app that does the same thing and doesn't require a debit or credit card, I'm open to suggestions.

I have found other solutions, but they are complicated, time-consuming, or limited in capability without payment for the service. Cloud9 just works right out of the box.

There are a ton of online virtual machines. I tried codeanywhere, but it's slow and fails a lot. I also have an account with IDEOne. Here's the list of alternatives.

foreshadow commented 7 years ago

If we can add a pause through node.js, I suppose cp is no longer needed. Update: but how do we do that?

One question about arbitrary extensions: what if users accidentally hit F5 when editing a non-python file? Adding the extension filter to the settings is still a good idea, though.

I am lucky that I registered my account before a credit card became required for registration. My username is infinity25. However, I am not familiar with C9 either. You can tell me how we can start our work.

ghost commented 7 years ago

I would just like to share something one of my high school peers once told me: simple is always better. It might not be easy, but it will be better. I wouldn't doubt it if he was quoting Steve Jobs (I would not have known at the time); in either case, they were both right.

How do we add pause using javascript and node?

This is actually really basic. On *nix:

$ python some_file.py

Windows example:

C:\Users\UserDirectory> cmd /c start python some_file.py

For python, you can use os.system, a method that functions very similarly to the C/C++ built-in system function. It takes a string, passes it to the OS, and if it is a valid command, it is executed.

Node's child_process.spawn works a lot like Python's subprocess.call method. It takes a list (or array-like object) and passes it to the OS to execute.

child_process.spawn('cmd', ['/c', 'start', 'python', 'some_file.py'])

Note: the start command isn't required.

As for the pause itself, in python that's just

raw_input('Press enter to continue.')

or in C,

fgets(user_input, sizeof(user_input), stdin)

TBH, I don't know the answer to this problem, since I've never had to solve it before. node is unique in that it doesn't have an out-of-the-box option for this. I wouldn't doubt it if there were already a module that handles it for you; I'm sure npm has something somewhere.

This is why I suggested leaving it up to the user. Even Visual Studio doesn't do this for you: unless you explicitly hard-code it into your program, the window shuts down on you after execution completes.

Although, I must admit that I'm itching to figure it out. I'm sure there's a way to do it. Just have to figure it out.

What if users accidentally hit F5 when editing a non-python file?

It shouldn't matter. If I want to run the command node some_file.js and node is in my PATH, it should just work. Otherwise, it should fail.

An abrupt and vivid failure is always better than a silent one. We could always create a try...catch statement to handle those exceptions gracefully.

Regardless, it should always just work as long as the user configured it correctly or left the default settings. If not, it will fail and should indicate an error as to why.

Adding the extension filter into settings is still a good idea though.

I don't see any problem with adding an extension filter, as long as it is optional and the user can configure it by creating a list of allowed extensions. A nice safety net of sorts.

However, I am not familiar with C9 too. You can tell me how we can start our work.

One of us creates a container, sets it up, and configures it to allow a given user to collaborate. There's a button in the top right that lets you give users read and write access to the workspace environment within the container itself. I've never done it, but I'm always willing to learn.

foreshadow commented 7 years ago

So we still need cp... I suppose a great number of users have gotten used to it. However, I am trying other ways to do this.

You can try this link to enter my workspace https://ide.c9.io/infinity25/atom-python-run .

ghost commented 7 years ago

I did some searching, and the process that's created has stdin, stdout, and stderr attributes. Apparently there isn't a direct way to do it; we might have to do some tinkering. I found that readline, process, and child_process all share the same stream properties.

Straight from the node docs for readline:

The following simple example illustrates the basic use of the readline module.

const readline = require('readline');

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

rl.question('What do you think of Node.js? ', (answer) => {
  // TODO: Log the answer in a database
  console.log(`Thank you for your valuable feedback: ${answer}`);

  rl.close();
});

source: readline

ghost commented 7 years ago

The c9 link doesn't work.

You'll have to go to Share (the button next to the gear in the top right).

In the Share This Workspace panel that opens, at the bottom of the panel, enter my username xovertheyearsx in the invite field, click invite, then click done.

foreshadow commented 7 years ago

I invited you.

The child_process command is cmd /c start python; I suppose this readline can only add a pause to the cmd command, without affecting the commands afterwards.

ghost commented 7 years ago

cmd is a process that executes the sub-process python. python lives in a sub-shell, so we need to tell the sub-shell to stay open until the user responds.

The idea would be to catch the child process right after python finishes executing, pausing before the parent process ends.

Ok, sounds good. I should be on tomorrow. I'll keep an eye out to see if you're logged in.

ghost commented 7 years ago

I couldn't figure out how to do it. I was wrong. Maybe one day I'll figure it out; today is not that day. cp is needed for Windows, mainly as the timer.

*nix has a system time command that pretty much does the same thing. powershell actually has a time-like command that works similarly to the *nix version.

https://superuser.com/questions/228056/windows-equivalent-to-unix-time-command

I'm just not as familiar with cmd and powershell as I am with bash.

I figured out how to time the process, but there was no way to keep the terminal open on Windows. So yes, cp is still needed. I was just so convinced there was a way!

https://social.technet.microsoft.com/wiki/contents/articles/7703.powershell-running-executables.aspx

http://www.computerperformance.co.uk/powershell/powershell_command_execute.htm

I guess what we can do is just add a small parser to escape certain characters for Win32. Hopefully we can track down that special-character bug.

ghost commented 7 years ago

So I was looking at the atom-runner package and it hit me: spawn output is meant to be printed to stdout, and spawn is meant to run asynchronously alongside other processes. The dev basically just takes the output and puts it in a view constructed by the add-on.

I now understand why you were using cp, and I guess there may be a way... Honestly, I don't have the time to get too deep into JavaScript or CoffeeScript, as much as I'd like to. My focus is currently on getting into Python, which is why I became interested in this package in the first place.

I hope you don't mind that I rewrote cp again. I rewrote it because I was tired of compiling the binaries for each system, and I had to create everything from scratch manually. So I ported the cp program to python. I also refactored some of the terminal module.

It works as expected with a bunch of new useful features.

This basically allows commands to be executed like so:

cmd /c start python cp/main.py python hello.py

prints:

Hello, World!

Process returned 0 (0x0)    execution time : 0.000237 s

Press [ENTER] to continue...

NOTE: Piping will have to be implemented in the front end, which must transmit the appropriate commands to terminal and then on to cp. Either that, or we'll have to figure out a way to handle it auto-magically in the background, so that the proper arguments are passed when necessary.

python cp/main.py -p '/path/to/output.file' '&>' node hello.js

will pipe stdout and stderr to output.file and print any contents to stdout within the executed terminal instance.

What I thought might make this useful was the idea of pairing the -p option with filename and symbol arguments.

I designed it to be as flexible as possible while imposing only the minimal restrictions required syntactically. Any of the following should execute as expected:

python cp/main.py python -h
...prints help information...

python cp/main.py python hello.py
...executes hello.py in a new tty instance...

python cp/main.py python -i hello.py
...executes hello.py in a new tty instance and enters interpreter afterwards...

python cp/main.py -p 'some.file.log' '>' python -i hello.py
...executes hello.py and writes stdout to some.file.log...

python cp/main.py -p 'some.file.log' '>>' python -i hello.py
...executes hello.py and appends stdout to some.file.log...

python cp/main.py -p 'some.file.log' '&>' python -i hello.py
...executes hello.py and writes stdout and stderr to some.file.log...

python cp/main.py -p 'some.file.log' '&' python -i hello.py
...doesn't work yet, i haven't finished implementing this alias feature...

python cp/main.py node hello.js
...executes hello.js using node...

My only complaint is that clock() is indeterminate. This couldn't be helped, and C/C++ face the same issue, since Python's clock() is just a wrapper around the C version.

I created a new branch called process so that when I'm ready to make my pull request, it doesn't mess up the main branch.

I did this on my local machine, I can always post it at c9.io if you'd like. Let me know what you think.

foreshadow commented 7 years ago

I am at home on vacation these days. I will read it carefully in two days. Besides, the version number will actually be generated automatically by the atom package manager when I call the apm publish command, so you don't need to change it manually. The PR is ok, you don't need to change it. I will fix it in later commits.

foreshadow commented 7 years ago

The new cp.py is excellent, and the log file helped a lot when I was debugging.

I changed the system command into subprocess.call for more compatibility. I also added a setting and changed terminal.js a little, making it possible for users to disable cp.

The commit and push (on the master branch) are made. You can check them, and then I will publish the new version.

TODO:

  1. File redirection is still not supported. I think we may need to add a setting (better than parsing it manually from the command) to get the file path and add it to the arguments of subprocess.call. I think I will leave this for the next version. I forgot you had implemented it and didn't look into the -p option. If it works now, you can add some instructions to the readme and reply to issue #39.
  2. cmd is set as a constant in terminal.js, while there is someone (#32) who wants to use another prompt. It is not that easy for me to change the code with the current structure. I wonder if you have a good idea.

The clock is rough, I know. I didn't mean to make an accurate timer; the prototype, Code::Blocks', is also rough. I am just thinking a rough timer is enough for me. It will stall for hundreds of milliseconds if the CPU is busy when trying to create a process, and we cannot capture that using clock(). Maybe calling a system API would be required, which would be another big issue.

C9.io is a good platform, but I found it not so convenient, because I cannot execute and test directly: it is not in the atom environment. So I think it is a good place to communicate ideas (github.com is also good, though), but not to write the atom plugin package code.

ghost commented 7 years ago

The new cp.py is excellent, and the log file helped a lot when I was debugging.

That's good news! I'm glad you liked it, and that's why I added the logging mechanism. It simplifies things and aids in figuring out what the core issue might be.

I changed the system command into subprocess.call for more compatibility. I also added a setting and changed a little in terminal.js making users able to disable the cp.

That's fine. system and call pretty much do the same thing... just in different ways.

I forgot you have implemented it and didn't look into the -p option. If it works now, you can add some instructions into readme and reply to the issue #39.

That shouldn't be a problem; it should still work regardless. I'll take a look anyway just to make sure it works as expected.

cmd is set as a constant in terminal.js, while there is someone (#32) who want use another prompt. It is not that easy for me to change the code with current structure. I wonder if you have a good idea.

That's because I encapsulated most of the functionality, which basically amounts to determining the OS, the shell, and the arguments to be passed to that shell (usually an execution option). For the most part, we can expect that behavior to remain constant (at least for the time being).

In my original design, I knew that even I might want to change the terminal, its options, and the other data passed to it. That's why I implemented it this way. You actually don't have to modify anything in terminal.js to achieve that.

The only object in terminal.js that is meant to be modified is type. You can change the terminal as you wish, as long as you modify the option passed to it. Here's an example.

In atom-python-run.js, just before you call spawn, you can modify the has_a_tty property and then call spawn afterwards.

terminal.type.has_a_tty.shell = 'powershell';
terminal.type.has_a_tty.option = ['-noexit', '"& "'];
let tty = terminal.type.spawns_tty(...cmds);

I made it simple to modify these properties on purpose when I first wrote the prototype for the script.

source: invoking powershell from cmd

Clock is rough, I know. I didn't mean to make an accurate timer, as the prototype - CodeBlocks' is also rough. I am just thinking a rough timer is enough for me.

That's okay, not a big deal in my opinion. That's why I noted the timeit module would be better for measuring when required.

C9.io is a good platform, but I found it not so convenient that I cannot execute to test it directly because it is not in the atom environment. So I think it is a good place to communicate ideas (github.com is also good though) but not to write the atom plugin package code.

I agree. It's great for live coding, even though I wasn't really a fan of it. I can do all that stuff locally, and I'm okay with communicating on GitHub. I just thought it would be a good way to communicate while the code was being written: do a remote push so we share the same code base, test it locally, and so on and so forth.

ghost commented 7 years ago

Just as an FYI, if you have a hard time understanding the terminal.js file, feel free to let me know. That's a sign that it needs to be refactored so that it's legible and coherent. Also, you should know that I've written very little javascript code; terminal.js was technically my first javascript script.

This was a general outline I made to help me map it out.

(image: source_map, an outline of terminal.js)

type is the public object that's used. I import the module as terminal for readability, then call the type object and its spawns_tty method, which is what we use in the atom-python-run.js file.

When terminal is imported, type has already been set up with has_a_tty, because the object is created only once, when it's first imported.

The rest is configured during each call depending on the call type and the properties that are modified.

Honestly, looking back at it now. It's horrific, lol.

ghost commented 7 years ago

This is complete as of the "Prepare 0.9.0 release" commit. It should work as expected now with the "Prepare 0.9.2 release".