ooper-zz opened 1 year ago
For a variety of reasons, the project in its current state will not catch on beyond people who used the original HyperCard, those who want to research the original, and those who want to make projects with a unique retro aesthetic (see: people who make PS1-style horror games). None of those people will care about ChatGPT compatibility, especially given its shortcomings (despite what people seem to think, ChatGPT is not perfect, and it is especially not good at programming; the exception now is Python, thanks to GPT-4's built-in interpreter).
If you want the project to appeal to the modern consumer, you would need to make so many changes that you may as well start a new project. At that point you also have the freedom to skip inventing a new programming language entirely: just save tokens by asking ChatGPT to produce a JSON file that describes what you want to do.
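To illustrate that idea, here is a rough sketch of what such a ChatGPT-generated JSON card description might look like. All field and key names here are hypothetical, not an existing WyldCard format:

```json
{
  "cardName": "contact",
  "fields": [
    { "name": "name",  "type": "text" },
    { "name": "phone", "type": "text" }
  ],
  "buttons": [
    { "name": "save", "script": "on mouseUp\n  -- save the card\nend mouseUp" }
  ]
}
```

A stable schema like this would also address the reproducibility problem below: the model only has to fill in a fixed structure rather than emit free-form code.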
Also, ChatGPT will produce slightly different code every time you ask it the same question. I can't imagine programming with nothing but a text prompt to ChatGPT; it would be a debugging nightmare.
I realize the goal is to stay true to good old HyperCard, but "UI by inference" would be a cutting-edge approach for WyldCard: use natural language processing and machine learning to generate UI elements on the fly from text input. With UI by inference, developers could create flexible, customizable conversational interfaces that adapt to a wide range of use cases, without manual UI design or coding. By letting users interact with technology in natural language, UI by inference points toward the next frontier in user interface design.
I realize there are challenges, layout templating among them, but here is one way we could do it:
Provided we have integrated ChatGPT: take a prompt as input; parse it into object(s), nouns, adjectives, and prepositions using an NLP library such as Stanford CoreNLP (the sketch below imports a SpaCy Java binding instead); build a script that creates a memory-resident card (or a persistent one, depending on the use case) for that object; show the card to the user for input; and finally build the response and pass it back to ChatGPT for further processing. Here is sample code, followed by a few prompts. None of it has been tested.
```java
import io.github.bensku.spacelang.SpaCyLang;
import io.github.bensku.spacelang.api.Model;
import io.github.bensku.spacelang.api.ModelPackage;
import io.github.bensku.spacelang.api.Token;
import io.github.bensku.spacelang.api.Tokenizer;
import io.github.bensku.spacelang.impl.SpaCy;
import io.github.bensku.spacelang.impl.model.BaseModelPackage;
import com.defano.wyldcard.runtime.ExecutionContext;
import com.defano.hypertalk.exception.HtException;
import com.defano.hypertalk.exception.HtSemanticException;

import java.util.ArrayList;
import java.util.List;

/**
 * Holds the state needed to build a memory-resident card from a parsed
 * prompt. Untested sketch; the NLP imports above are retained for the
 * eventual parsing logic even though this skeleton does not use them yet.
 */
public class InferredPromptCard {
    private final String prompt;
    private final List<String> fieldNames;   // one field per noun extracted from the prompt
    private final ExecutionContext context;

    public InferredPromptCard(String prompt, ExecutionContext context) {
        this.prompt = prompt;
        this.context = context;
        this.fieldNames = new ArrayList<>();
    }
}
```

Unit test (Mockito is used to stub the `ExecutionContext`):

```java
import com.defano.hypertalk.exception.HtException;
import com.defano.wyldcard.runtime.ExecutionContext;
import org.junit.jupiter.api.Test;

import java.util.Arrays;
import java.util.List;

import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.mockito.Mockito.*;

public class InferredPromptCardTest {

    @Test
    public void constructsCardFromPrompt() {
        ExecutionContext context = mock(ExecutionContext.class);
        InferredPromptCard card = new InferredPromptCard("Create a contact card", context);
        assertNotNull(card);
    }
}
```
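To make the parsing step above concrete without pulling in an NLP dependency, here is a naive, self-contained stand-in that extracts candidate field names from a prompt using a stop-word filter instead of a real POS tagger. The class name, method name, and stop-word list are all hypothetical; in the real pipeline this would be replaced by Stanford CoreNLP or the SpaCy binding imported above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Set;

/**
 * Naive stand-in for the NLP parsing step: pulls candidate field names
 * (rough "nouns") out of a prompt by filtering out common stop words.
 * A real POS tagger would replace this heuristic.
 */
public class PromptFieldExtractor {

    // Hypothetical stop-word list; a real tagger makes this unnecessary.
    private static final Set<String> STOP_WORDS = Set.of(
            "a", "an", "the", "with", "and", "create", "make", "new", "card");

    public static List<String> extractFieldNames(String prompt) {
        List<String> fields = new ArrayList<>();
        // Lowercase, split on non-letters, keep first occurrence of each non-stop word
        for (String word : prompt.toLowerCase(Locale.ROOT).split("[^a-z]+")) {
            if (!word.isEmpty() && !STOP_WORDS.contains(word) && !fields.contains(word)) {
                fields.add(word);
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        // → [contact, name, phone]
        System.out.println(extractFieldNames("Create a contact card with name and phone"));
    }
}
```

Each extracted word would become one field on the generated card; the user's entries in those fields are what gets packaged up and sent back to ChatGPT.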