uxmal / pytocs

Converts Python source to C#
Apache License 2.0

HI #54

Closed iammobina closed 5 years ago

iammobina commented 5 years ago

How can I use this? Please explain it to me clearly.

uxmal commented 5 years ago

I tried reaching you on gitter but got no reply.

Pytocs is a translation tool that translates Python source code to C#. This is done by giving pytocs either the name of a Python source file or the name of a file system directory containing Python files. The tool then generates, for each Python file, a corresponding translated C# file.

The README.md file contains the instructions for how to build and use the command line version of Pytocs. You can also use the pytocs GUI shell (provided by @SWATOPLUS) if you prefer working that way.
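As a quick illustration (the file name and contents below are made up, not from this issue): given a small module like

# hello.py -- any ordinary Python module works as input.
def greet(name):
    return "Hello, " + name

running pytocs hello.py (or pointing pytocs at the directory that contains the file) should produce a translated C# file named hello.py.cs next to the original.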

iammobina commented 5 years ago

Thanks for your reply. I've read README.md but I can't understand it well: I opened pytocs.sln, ran the CLI, and when I typed this line: pytocs filecontroller.py, I got an exception: -(1): Expected token type NEWLINE, but saw ID. How can I fix this? Please help me.

uxmal commented 5 years ago

Here is what happens when I compile the pytocs.sln solution and use the resulting command line program pytocs.exe (which is on my $PATH) to translate a Python file:

C:\tmp\test>copy con test.py
def foo(a, b):
    return a + b
^Z
        1 file(s) copied.

C:\tmp\test>pytocs test.py

C:\tmp\test>type test.py.cs

public static class test {

    public static object foo(object a, object b) {
        return a + b;
    }
}

C:\tmp\test>

I captured this output from cmd.exe. Could you do the same thing on your end to show me exactly how you are using the tool?

SWATOPLUS commented 5 years ago

@iammobina Could you send your source file filecontroller.py? Maybe this file contains some code that is currently unsupported by pytocs. If you send the file, I will test it and try to fix this issue.

uxmal commented 5 years ago

@SWATOPLUS has a point: although Pytocs should be able to parse most Python 2.x and Python 3.x code, there may naturally be bugs in the Python parser. However, I've seen this error before and most of the time it's because the pytocs command line tool isn't being used correctly.
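One way a parse error like Expected token type NEWLINE, but saw ID can arise is from source that has been flattened so that several statements share one physical line, for example after copying code through email or a web form. Below is a minimal sketch of that failure mode (illustrative only, not taken from this issue), using CPython's own compile() as a stand-in for the pytocs parser:

# Two statements crammed onto one physical line with no separator: the parser
# expects a NEWLINE after the first statement but sees another identifier.
flattened = "x = 1 y = 2"
try:
    compile(flattened, "<pasted>", "exec")
except SyntaxError as err:
    print("rejected:", err.msg)

If filecontroller.py was assembled by copy-pasting code from a web page or email, checking that its line breaks and indentation survived intact is a cheap first step.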

iammobina commented 5 years ago

class fileController:
    def __init__(self, inputAddress):
        self.inputAddress = inputAddress
        self.alphabet = []
        self.states_count = 0
        self.starting_states = []
        self.final_states = []
        self.states = []
        self.grammer = []
        self._fileHandler()

def _fileHandler(self):
    file = open(self.inputAddress,'r')
    lines = file.readlines()
    self.states_count = int(lines[0])
    self.grammer = lines[2:]
    lines[1] = lines[1].strip()
    self.alphabet = lines[1].split(',')
    self._stateHandler(self.grammer)
    self.states.sort()
    self._grammer_corrector()        
    return [self.states_count,self.alphabet,self.states,self.starting_states,self.final_states,self.grammer]

def _stateHandler(self,unknown_grammer):
    for gram in unknown_grammer:
        gram = gram.strip()
        splited_gram = gram.split(',')
        temp = splited_gram[0]
        if("->" in temp):
            if(temp not in self.starting_states):
                self.starting_states.append(temp)
        elif("*" in temp):
            if(temp not in self.final_states):
                self.final_states.append(temp)
        else:
            if(temp not in self.states):
                self.states.append(temp)
        temp = splited_gram[2]
        if("->" in temp):
            if(temp not in self.starting_states):
                self.starting_states.append(temp)
                temp = temp[2:]
                if temp not in self.states:
                    self.states.append(temp)
        elif("*" in temp):
            if(temp not in self.final_states):
                self.final_states.append(temp)
        else:
            if(temp not in self.states):
                self.states.append(temp)

def _grammer_corrector(self):
    self.grammer = [i.replace("->",'') for i in self.grammer]
    for i in range(len(self.grammer)):
        self.grammer[i] = self.grammer[i].strip()
    self.grammer.sort()

#!/usr/bin/python3

EPS = '_'
START = 'x'
END = 'y'
fin = None
fout = None

def write(s):
    if fout:
        fout.write(s)
    else:
        print(s)

def str_set(s):
    x = list(s)
    if x:
        y = []
        res = '{'
        c = []
        for i in x:
            if i.isdigit():
                y.append(int(i))
            else:
                c.append(i)
        y.sort()
        if START in c:
            res += START.upper() + ', '
        for i in y:
            res += '%d' % i + ', '
        if END in c:
            res += END.upper() + '}'
        else:
            res = res[:-2] + '}'
        return res
    return '{}'

def eps_closure(nfa, node_set):
    if node_set == set([]):
        return node_set
    res = node_set.copy()
    for node in node_set:
        next_list = nfa.get(node)
        if next_list:
            for next in next_list:
                if next[1] == EPS:
                    res.add(next[0])
                    if next[0] != node:
                        res |= eps_closure(nfa, set([next[0]]))
    return res

def next_set(nfa, now_set, c):
    res = set([])
    for node in now_set:
        next_list = nfa.get(node)
        if next_list:
            for next in next_list:
                if next[1] == c:
                    res.add(next[0])
    return res

def main():
    nfa = {}
    lit = set([])

for s in fin:
    e = s.lower().split()
    if nfa.get(e[0]):
        nfa[e[0]].append((e[1], e[2]))
    else:
        nfa[e[0]] = [(e[1], e[2])]
    lit.add(e[2])
lit.remove(EPS)
liter = list(lit)
liter.sort()
q = [eps_closure(nfa, set([START]))]
status = [q[0]]
dfa_str = ''
dfa = {}
end_node = []
end_nodes = []
mid_node = []
while q:
    now = q.pop(0)
    i = status.index(now)
    now_index = '%d' % i
    end_str = ''
    if END in now:
        end_str = '*'
        end_node.append(i)
        end_nodes.append("q"+str(i))
    else:
        mid_node.append(i)
    # write(str_set(now) + ' ')
    dfa_str += end_str +"q"+ now_index + ' '
    # print(dfa_str)
    next_dict = {}
    for c in liter:
        next = eps_closure(nfa, next_set(nfa, now, c))
        if not next in status and next:
            q.append(next)
            status.append(next)
        j = status.index(next) if next else -1
        next_index = '%d' % j
        # write(str_set(next) + ' ')
        dfa_str += "q"+next_index + ' '
        next_dict[c] = j
    # write('\n')
    dfa_str += '\n'
    dfa[i] = next_dict
# write('\ns %s\n%s\n' % (' '.join(liter), dfa_str))

print(end_nodes)

answer = str(len(status)) + "\n" + '%s\n%s\n' % (','.join(liter), dfa_str)
# print(dfa_str.splitlines(),liter)

listOfDfaStr = dfa_str.splitlines()

for i in range(len(listOfDfaStr)):
    x = listOfDfaStr[i].split(" ")[:len(listOfDfaStr[i].split(" "))-1]
    listOfDfaStr[i] = x
# print(listOfDfaStr)

ToPrint = ""

for i in range(len(listOfDfaStr)):
    source = listOfDfaStr[i][0]
    for j in range(len(listOfDfaStr[i])):
        if j != 0:
            if j != len(listOfDfaStr[i]) -1:
                if source == listOfDfaStr[0][0]:
                    ToPrint +="->" + source + "," +liter[j-1] + ","  + listOfDfaStr[i][j] + "\n"
                else:
                    ToPrint += source + "," +liter[j-1] + ","  + listOfDfaStr[i][j] + "\n"
            else:
                ToPrint += source + "," +liter[j-1] + ","  + listOfDfaStr[i][j] + "\n"
# print(ToPrint,ToPrint.find("q0"))

for final in end_nodes:
    print(final)
    prevoius_final = 0
    while ToPrint.find(final,prevoius_final) != -1:
        border = ToPrint.find(final,prevoius_final)
        if ToPrint[border-1] != "*":
            ToPrint = ToPrint[:border] + "*" + ToPrint[border:]
            prevoius_final = border + 3
        else:
            prevoius_final = border + 3
print(ToPrint)

write(ToPrint)

# print('s %s\n%s\n' % (' '.join(liter), dfa_str))
# print(liter,dfa_str )
q = [[end_node, True], [mid_node, True]]
fresh = True
while fresh:
    now = q[0]
    for c in liter:
        next = {}
        for i in now[0]:
            if dfa[i][c] == -1:
                if next.get(-1):
                    next[-1].append(i)
                else:
                    next[-1] = [i]
            else:
                j = 0
                for x in q:
                    if dfa[i][c] in x[0]:
                        if next.get(j):
                            next[j].append(i)
                        else:
                            next[j] = [i]
                    j += 1
        splited = True
        now_split = next.values()
        if now[0] in now_split:
            splited = False
        else:
            for x in now_split:
                q.append([x, True])
            break
    q.pop(0)
    if not splited:
        q.append([now[0], False])
    fresh = False
    for x in q:
        if x[1] == True:
            fresh = True
            break
split = [x for x, y in q]
split.sort()
# write(str(split).replace('[', '{').replace(']', '}') + '\n')
for x in split:
    if len(x) > 1:
        rep = x[0]
        for i in range(1, len(x)):
            for j in dfa:
                for c in liter:
                    if dfa[j][c] == x[i]:
                        dfa[j][c] = rep
            del dfa[x[i]]
# write('\ns %s\n' % (' '.join(liter)))
# for i in dfa:
#     write('%d%s ' % (i, '*' if i in end_node else ''))
#     for c in liter:
        # write('%d ' % dfa[i][c])
    # write('\n')
fin.close()
fout.close()

if __name__ == '__main__':
    import os

input = open("input.txt",'r')
output = open("nfa_0.txt","w")

input_file = input.readlines()

initial_state = None

final_state = None

for line in range(len(input_file)):
    if line >=2:
        rule = input_file[line].replace("\n", "").split(",")
        rule_to_write=''
        if "->" in rule[0]:
            initial_state = rule[0][2:]
            # print(initial_state)
        # elif "*" in rule[0]:
        #     final_state = rule[0][1:]
        #     print(final_state)

        for part in range(len(rule)):
            if "->" in rule[part]:
                rule[part] = rule[part].replace("->",'')
            if "*" in rule[part]:
                rule[part] = rule[part].replace("*",'')
                final_state = rule[part]
            if initial_state != None and initial_state in rule[part]:
                rule[part] = rule[part].replace(initial_state,"x")
            elif final_state != None and final_state in rule[part]:
                rule[part] = rule[part].replace(final_state,'y')

        rule_to_write = " ".join([rule[0],rule[2],rule[1]])
        # print(rule_to_write,"asasas")
        rule_to_write += "\n"
        output.write(rule_to_write)
output.close()

now_dir = os.path.dirname(os.path.realpath(__file__))
files = [x for x in os.listdir(now_dir) if os.path.isfile(x) and x.endswith('txt') and x.startswith('nfa_')]
for x in files:
    fin = open(x, 'r')
    fout = open(x.replace('nfa_', 'dfa_'), 'w')

main()

5
0,1
->q0,1,q3
q0,0,q1
q2,0,q1
q1,0,q2
q1,1,q4
q4,0,q4
q4,1,q4
q3,0,q2
q3,1,q4
q2,1,q4

3
0 1
->g0,0,g1
g0,1,g1
g1,0,g1
g1,1,g2
g2,0,g2
g2,1,g2

from Q2 import find_relevent_grammers,find_group

class outPutController:
    def __init__(self, sett, grammer, alphabet, starting_states):
        self.outputAddress = "output.txt"
        self.alphabet = alphabet
        self.states_count = len(sett)
        self.starting_states = []
        self.final_states = []
        self.states = []
        self.grammer = []
        self._state_generator(sett, starting_states)
        self._grammer_generator(grammer, sett)
        self._file_handler()

def _state_generator(self, sett, starting_states):
    for i in range(len(starting_states)):
        starting_states[i] = starting_states[i][2:]
    for i in range(len(sett)):

        if('*' not in sett[i][0]):
            self.states.append('g'+str(i))
            for j in range(len(sett[i])):
                flag = False
                for j in starting_states:
                    if(j in sett[i]):
                        flag = True
                        break
                if(flag):
                    self.starting_states.append('->g'+str(i))
                    break

        else:
            self.final_states.append('*g'+str(i))
def _grammer_generator(self,grammer,sett):
    for i in range(len(sett)):
        temp = sett[i][0]
        grams = self.find_relevent_grammers(grammer,temp)
        for gram in grams:
            gram = gram.split(',')
            source_index = self.find_group(gram[0],sett)
            sink_index = self.find_group(gram[2],sett)
            source_index = str(source_index)
            sink_index = str(sink_index)
            source = 'g'+source_index
            sink = 'g'+sink_index
            if("*"+source in self.final_states):
                source = '*'+source
            if("*"+sink in self.final_states):
                sink = '*'+sink
            self.grammer.append(source+','+gram[1]+','+sink)
    for initstate in self.starting_states:
        temp = initstate[2:]
        for i in range(len(self.grammer)):
            if(self.grammer[i].find(temp) == 0):
                self.grammer[i] = "->"+ self.grammer[i]
                break

def find_relevent_grammers(self,grammer,state):
    temp_grams=[]
    for gram in grammer:
        if(gram.find(state)==0):
            temp_grams.append(gram)
    return temp_grams

def find_group(self,state,former_set):
    for i in range(len(former_set)):
        if(state in former_set[i]):
            return i
def _file_handler(self):
    file = open(self.outputAddress,'w')
    file_content = [self.states_count,' '.join(self.alphabet)]+self.grammer
    file_content[0] = str(file_content[0])
    file_content[1] = str(file_content[1])
    for i in range(len(file_content)):
        file_content[i] += '\n'
    file.writelines(file_content)
    file.close()

from fileController import fileController
handler = fileController("input.txt")
print(handler._fileHandler())

from fileController import fileController
from outPutController import outPutController

def minimization_handler(handler, new_set, former_set):

latter_set = []

if(len(new_set)==1):
    return [new_set]
elif(len(new_set)>1):
    chart = state_chart_creator(handler,new_set,former_set)
    groups = []
    for state in chart.keys():
        flag = False
        for i in range(len(groups)):
            if(chart[groups[i][0]] == chart[state]):
                groups[i].append(state)
                flag = True
                break
        if(not flag):
            groups.append([state])
    return groups

def find_relevent_grammers(grammer, state):
    temp_grams = []
    for gram in grammer:
        if(gram.find(state) == 0):
            temp_grams.append(gram)
    return temp_grams

def state_chart_creator(handler, new_set, former_set):
    chart = {}
    for state in new_set:
        grammers = find_relevent_grammers(handler.grammer, state)
        chart_specifications = []
        for grammer in grammers:
            splited_grammer = grammer.split(',')
            chart_specifications.append(find_group(splited_grammer[2], former_set))
        chart[state] = tuple(chart_specifications)
    return chart

def find_group(state, former_set):
    for i in range(len(former_set)):
        if(state in former_set[i]):
            return i

handler = fileController("input.txt")
former_set = [handler.states, handler.final_states]
latter_set = []
while True:
    latter_set = []
    for new_set in former_set:
        latter_set += minimization_handler(handler, new_set, former_set)
    if(len(latter_set) != len(former_set)):
        former_set = latter_set
    else:
        break
minimized_dfa = outPutController(latter_set, handler.grammer, handler.alphabet, handler.starting_states)

print(minimized_dfa.states_count)
print(minimized_dfa.alphabet)
for i in minimized_dfa.grammer:
    print(i)

uxmal commented 5 years ago

I copied the code above to a file, calling it test.py, commented out lines 328-347, consisting mostly of numbers, commas, asterisks and the '->' symbol, and ran pytocs like this on the command line:

D:\dev\tmp>pytocs -r .
AST cache is at: C:\Users\jkl\AppData\Local\Temp\pytocs\ast_cache
100% (1 of 1)   SPEED:   250/s   AVG SPEED:   166/s       Finished loading files. 18 functions were called.
Analyzing uncalled functions.
100% (1 of 1)   SPEED:     0/s   AVG SPEED:     1/s
---------------- Analysis symmary ----------------
- total time: 00:00:00
- modules loaded: 1
- semantic problems: 1
- failed to parse: 0
- number of definitions: 1199
- number of cross references: 1143
- number of references: 703
- resolved names: 596
- unresolved names: 3
- name resolve rate:  99%

D:\dev\tmp>

I've attached the Python file and the resulting C# file (packed in a zip file).

@iammobina: please try doing exactly what I did above, making sure that pytocs.exe is on your PATH.

test.zip

uxmal commented 5 years ago

Closing due to inactivity. If your issue isn't resolved, please reopen it.