Open wyw862788 opened 7 months ago
Thanks for reaching out. This program does not produce a machine learning model. Rather, it creates the dataset that you feed into a model. This approach offers two advantages: the freedom to try different models, and a simpler, more focused code base.
The suggested tools that created the model and the visual are just one example. The success I got came from a manual review of the output data.
The manual strategy I used to bet was based on Bernoulli trials: on each draw, the feature has a probability p of taking the value True (even though p could vary slightly, it was ok for estimating). When the feature had taken the same value v1 for k draws in a row (example, for k = 8 and v1 = False: the last 8 values were False), I used the binomial distribution B(k, p) to estimate the probability of the next value being True, i.e. that the winning number for the middle column would be among the 3. For details on what the feature does, see the comments in the code.
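The run-based reasoning above can be sketched numerically. This is only an illustration under the stated idealization of independent trials with a fixed p (the function names are mine, not from the repository):

```python
# Probability sketch for the Bernoulli-trial strategy described above.
# Assumes independent draws with a fixed probability p of the feature
# being True on any given draw (an idealization; p "could vary slightly").

def prob_run_of_false(p: float, k: int) -> float:
    """Probability of observing k consecutive False values: (1 - p)**k."""
    return (1.0 - p) ** k

def prob_next_true(p: float, k: int) -> float:
    """Under true independence, the next draw is True with probability p,
    regardless of how long the preceding run of False values was."""
    return p

if __name__ == "__main__":
    p, k = 0.3, 8
    print(f"P(run of {k} Falses) = {prob_run_of_false(p, k):.6f}")
    print(f"P(next is True)      = {prob_next_true(p, k):.6f}")
```

Note the second function: if the trials really were independent with fixed p, a long run would not change the probability of the next value, which is why the estimate was only used as a rough betting heuristic.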
Dear JeffMv,
I'm glad to receive your reply. I'm wondering about the following line of code, located within the function studyUniverseLengths:
previousSize = len(previousFrame)
Is it a typo? Shouldn't it be:
previousSize = len(previousUniverse)
Additionally, I have successfully run your code featuresUpdater.py, and it generated the result file univ-length-over10.tsv.
Now, I'd like to know how to use the generated result file for prediction. Could you provide an example code for creating a model and making predictions using the result file?
Much appreciated!
Thanks for your contribution. It might be a typo; I will have to check the output.
Ultimately, what a model would produce from these files are recommendations of feature values, and you need to be familiar with those features to translate them into recommended numbers. To use the generated file, try to make sense of what the features mean (see the comments in the code regarding each feature, and so on).
I didn't provide code for generating a model. The goal of a model is simple:
you select a column you would like to predict;
the model uses the other columns of a row to predict that target column.
You can start by making a simple model with the scikit-learn library to get the hang of how to train a model and how the prediction error is supposed to decrease the more you train ("fit") the model. (Lots of tutorials are available.)
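To make that suggestion concrete, here is a minimal, hedged sketch of the workflow with scikit-learn. The column names (feat_a, feat_b, target) and the synthetic data are placeholders standing in for the columns of a generated TSV such as univ-length-over10.tsv; for the real file you would load it with pandas.read_csv(path, sep="\t") and pick your target column:

```python
# Minimal scikit-learn workflow: pick a target column, train on the other
# columns, and check accuracy on held-out rows. Synthetic data stands in
# for the generated TSV; the column names are illustrative placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "feat_a": rng.integers(0, 10, n),  # placeholder feature columns
    "feat_b": rng.integers(0, 10, n),
})
# Placeholder target; with the real file, choose the column you want to predict.
df["target"] = (df["feat_a"] + df["feat_b"] > 9).astype(int)

X = df[["feat_a", "feat_b"]]
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, shuffle=False  # keep draw order
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"test accuracy: {acc:.3f}")
```

Because the rows come from consecutive draws, shuffle=False keeps the test rows chronologically after the training rows, which is the honest way to evaluate on time-ordered data.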
Dear JeffMv,
After repeatedly reading your articles and replies, I have a basic understanding of your intentions. I have also reproduced your graphical results using Rapid Miner software.
So, can I construct models for prediction using Rapid Miner software, or can I write my own Python code and use my preferred models for prediction? Is that correct?
Inspired by you, I plan to modify some of your code. My idea is to divide the numbers into two groups based on their magnitude (56789 and 01234), or based on odd and even numbers (02468 and 13579).
I will then count the occurrences of larger or even numbers in the last 10 draws. The process of generating data features will be similar to your original code.
However, this approach will predict whether the next draw will have larger (smaller) numbers, or odd (even) numbers.
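The grouping idea described above can be sketched in a few lines. This assumes 3-digit draws given as lists of digits, with the group definitions from the message (large = 5–9, odd = 1, 3, 5, 7, 9); the helper name is mine:

```python
# Count "large" (5-9) or odd digits among the last `window` draws of a
# 3-digit game. Draws are assumed to be sequences of digits 0-9, newest last.

def count_in_window(draws, predicate, window=10):
    """Number of digits satisfying `predicate` in the last `window` draws."""
    recent = draws[-window:]
    return sum(1 for draw in recent for d in draw if predicate(d))

draws = [[5, 0, 9], [1, 2, 3], [8, 8, 4], [7, 6, 5]]  # toy history
n_large = count_in_window(draws, lambda d: d >= 5)
n_odd = count_in_window(draws, lambda d: d % 2 == 1)
print(n_large, n_odd)  # 7 6
```

Each count could then become one feature column per draw, in the same spirit as the existing feature-generation code.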
Do you think this approach is feasible?
Best regards,
Dear JeffMv
Why does it seem like the program is being executed twice when running makeFeatures? I've checked the code but couldn't find the reason. Can you provide any assistance?
Many thanks!
PS D:\python\Lofea-master2\eulolib> python featuresUpdater.py --makeFeatures --gameId=3d --draws="D:/python/Lofea-master2/data/example-inputs/3d.txt" --saveDir="3d"
D:\python\Lofea-master2\eulolib E:\Python\python310.zip E:\Python\DLLs E:\Python\lib E:\Python E:\Python\lib\site-packages E:\Python\lib\site-packages\win32 E:\Python\lib\site-packages\win32\lib E:\Python\lib\site-packages\Pythonwin D:\python\Lofea-master2
D:\python\Lofea-master2\eulolib E:\Python\python310.zip E:\Python\DLLs E:\Python\lib E:\Python E:\Python\lib\site-packages E:\Python\lib\site-packages\win32 E:\Python\lib\site-packages\win32\lib E:\Python\lib\site-packages\Pythonwin D:\python\Lofea-master2 D:\python\Lofea-master2
Saving to: D:/python/Lofea-master2/data/example-inputs\computed\3d\col-1\univ-length-over10.tsv
Saving to: D:/python/Lofea-master2/data/example-inputs\computed\3d\col-2\univ-length-over10.tsv
Saving to: D:/python/Lofea-master2/data/example-inputs\computed\3d\col-3\univ-length-over10.tsv
Saving to: D:/python/Lofea-master2/data/example-inputs\computed\3d\col-1\univ-length-over10.tsv
Saving to: D:/python/Lofea-master2/data/example-inputs\computed\3d\col-2\univ-length-over10.tsv
Saving to: D:/python/Lofea-master2/data/example-inputs\computed\3d\col-3\univ-length-over10.tsv
Hi, regarding
can I construct models for prediction using Rapid Miner software, or can I write my own Python code and use my preferred models for prediction? Is that correct?
Yes, you can use that approach. As usual in machine learning, divide the dataset (the rows) into a training set and a test set (and a validation set). Then train the model on the training set, test its accuracy on the test set, etc.
Regarding dividing by magnitude or odd/even numbers: it preserves equiprobability, which is a good thing (even though it has drawbacks in a fully random experiment like a well-randomized lottery). I'd say keep an equiprobable subdivision (by odds/magnitude) and the corresponding model at hand as a baseline/benchmark for comparison when you do more complex subdivisions. (The universe length is such a more complex subdivision, without equiprobability.)
There are a few things I will need to review this weekend.
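One detail worth making explicit: since the rows come from consecutive draws, a chronological split (rather than a shuffled one) avoids training on draws that happened after the test draws. A minimal sketch with illustrative 70/15/15 proportions:

```python
# Chronological train/validation/test split for time-ordered draw data.
# The 70/15/15 proportions are illustrative, not prescribed by the repo.

def chrono_split(rows, train=0.70, val=0.15):
    """Split `rows` (oldest first) into train/validation/test, preserving order."""
    n = len(rows)
    i = int(n * train)
    j = int(n * (train + val))
    return rows[:i], rows[i:j], rows[j:]

rows = list(range(100))  # stand-in for 100 feature rows in draw order
tr, va, te = chrono_split(rows)
print(len(tr), len(va), len(te))  # 70 15 15
```

Every row in the validation and test sets comes strictly after every training row, which mirrors how the model would actually be used on future draws.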
Dear JeffMv I am researching a Chinese lottery similar to Magic3, so I referred to your method.
Using AI to build models, the accuracy on the test set can reach over 90%, even 100%.
However, predicting future data doesn't yield very good results.
I have almost given up on the research many times, because it seems that lottery numbers really cannot be predicted.
Wish you good luck.
Hello JeffMv
I recently came across a project on GitHub. I'm not sure if it would be helpful for us. https://github.com/time-series-foundation-models/lag-llama
Because the Google Colab website is inaccessible from my country, could you please send me the source code from the following two links? If there are any dataset CSV files, please include them as well. Thank you very much.
Colab Demo 1 Colab Demo 2
I don't know if you're still working on Magic3, but it seems there are some issues with your code that prevent it from running properly. Could you simplify your process? It only needs two parts: first, reading the data, training the model, and saving the model; second, loading the model to make predictions. Also, the historical data only needs to include the 3 winning numbers, nothing else. I'm a beginner in Python; could you provide assistance? Thank you very much.