ObrienlabsDev / blog

Blogs and Wiki
Apache License 2.0

Google Gemini LLM #20

Open fmichaelobrien opened 7 months ago

fmichaelobrien commented 7 months ago
fmichaelobrien commented 5 months ago

Gemini standard is live in Canada as of 20240208. Gemini Advanced is still being allow-listed, but is available to Google One accounts: http://gemini.google.com

obriensystems commented 4 months ago

Gemini Advanced is live in Canada as of 20240217

From Gemini Pro

how can I get google gemini advanced in canada

As of today, February 17, 2024, you can get Google Gemini Advanced in Canada through the new Google One AI Premium Plan. Here's how:

1. Subscribe to the Google One AI Premium Plan:

This plan costs $26.99 CAD per month, with a two-month free trial.
It includes all the benefits of the regular Google One Premium plan (2TB of cloud storage, expert help, etc.) plus access to Gemini Advanced.
2. Access Gemini Advanced:

There are two ways to access it:
Website: Go to https://gemini.google.com, click "Menu" at the top, then "Upgrade to Gemini Advanced" at the bottom.
Google One app: Go to the "Manage subscription" section, select your Google One AI Premium plan, and tap "Upgrade to Gemini Advanced."
Additional information:

Currently, Gemini Advanced is only available in English.
You can find more information about Gemini Advanced and the Google One AI Premium Plan on the Google One website: https://support.google.com/googleone/answer/14534406
Here are some resources that you might find helpful:

Global News article: https://globalnews.ca/news/10155585/google-gemini-chatgpt-ai/
Wired guide: https://www.wired.com/story/how-to-use-google-gemini-ai-bard-chatbot/
I hope this helps! Let me know if you have any other questions.
What is the latest research in machine intelligence that combines work from quantum biology - something like quantum effects from magnetic fields, microgravity, or emergent consciousness from microtubules?
Unfortunately, it seems you've hit upon a very niche and speculative area where hard scientific research is limited and often highly controversial. Here's what I can break down about the concepts you mention:

1. Quantum Effects from Magnetic Fields

Bird Navigation: There's some tentative evidence that migrating birds could be sensitive to very subtle changes in Earth's magnetic field due to quantum processes involving molecules in their eyes. This is an active area of research in quantum biology.
Machine Intelligence Connection: It's highly speculative to apply this to AI directly. Potential areas of distant exploration might include developing extremely sensitive magnetic sensors, but linking this to intelligence, as we understand it, would be tenuous at best.
2. Quantum Effects from Microgravity

Plant Studies: Some experiments point to possible quantum-influenced processes related to how plants sense gravity. However, this research is highly preliminary.
Machine Intelligence Connection: The link here is extremely remote. While microgravity has been studied for its impact on materials and robotics, extrapolating to a quantum influence on computing or intelligence is a giant leap with no current basis.
3. Emergent Consciousness from Microtubules

Orch-OR Theory: This falls under the Orchestrated Objective Reduction (Orch-OR) theory, as we discussed previously. It remains highly controversial within the scientific community.
Machine Intelligence Connection: As established earlier, mapping and replicating this speculative biological theory onto machine systems faces massive scientific and conceptual hurdles.
Where Real Research Might Exist

Rather than direct links between these specific quantum biological phenomena and machine intelligence, there's a better chance of finding indirect overlaps in the following areas:

Quantum-Inspired Algorithms: Some researchers explore potential algorithmic ideas drawing loose inspiration from quantum processes like superposition and entanglement. However, they don't aim to simulate real quantum consciousness.
Biomimicry: Taking broader inspiration from the efficiency of biological information processing in systems like the brain could inspire novel algorithmic structures or memory systems – but at a level very abstracted from true quantum biology.
Biocompatible AI: A distant-future concept might involve neural interfaces interacting with a biological brain using extremely subtle signals. Hypothetically, quantum effects influencing neural transmission could then need to be taken into account, but that field doesn't currently exist.
Important Disclaimers

It's essential to treat research venturing into these 'hybrid' areas with skepticism. Clear distinctions between established quantum biology and applications to AI can easily become blurred.
Watch out for research overstating findings, or using quantum terminology purely for its 'buzz factor' rather than true scientific underpinnings.
If you genuinely want to find bleeding-edge research, be prepared to sift through niche, potentially flawed studies. I cannot endorse specific sources due to the highly contested nature of this field.
obriensystems commented 4 months ago

workspace: how can i get google gemini advanced in canada on a workspace account

Unfortunately, as of today, February 17, 2024, Google Gemini Advanced is not yet available for Google Workspace accounts, including those in Canada. While the paid Google One AI Premium Plan grants access to Gemini Advanced for personal accounts, this feature doesn't currently extend to Workspace users.

Here's what we know about the current availability:

Gemini Advanced is restricted to personal Google accounts: You'll need a personal Google One account with the AI Premium Plan to use it.
Workspace accounts are excluded: This means administrators or users on Workspace accounts cannot access Gemini Advanced at this time.

obriensystems commented 4 months ago

Ultra, could you give me an example using java lambda/streaming code - where a binary search tree is used to check whether all the items in a list are in the BST. Ideally the runtime of the operation is worst case O(log n) but ideally close to constant time. It is also essential that the BST remains balanced. As part of your answer detail whether and why you used a red black tree or AVL tree balancing algorithm

Absolutely! Here's a Java example using lambda and streaming coupled with a balanced binary search tree to efficiently determine if all list items exist within the BST. I'll discuss why I chose a Red-Black Tree for balancing.

Code Example

```java
import java.util.List;

public class BSTMembershipChecker {

    private static class Node<T extends Comparable<T>> {
        T data;
        Node<T> left;
        Node<T> right;

        // ... (additional fields for Red-Black Tree balancing, e.g. node color)
    }

    private Node<Integer> root; // assuming Integer data for this example

    // ... (standard BST insert methods with Red-Black Tree balancing logic)

    public boolean allItemsExistInBST(List<Integer> items) {
        return items.stream()
                    .allMatch(item -> contains(root, item));
    }

    private boolean contains(Node<Integer> node, Integer target) {
        if (node == null) {
            return false;
        }

        int comparison = target.compareTo(node.data);

        if (comparison == 0) {
            return true;
        } else if (comparison < 0) {
            return contains(node.left, target);
        } else {
            return contains(node.right, target);
        }
    }

    public static void main(String[] args) {
        // ... (setup of BST and testing)
    }
}
```

Explanation

Lambda/Streaming: Java's Streams and the allMatch method efficiently iterate through the input list and check each item's BST membership.

Binary Search Tree: The contains method leverages the core characteristic of BSTs for O(log n) lookup in the worst case. With a balanced tree, lookups stay shallow, though they remain O(log n) rather than truly constant time.

Balancing: Red-Black Tree

Why? AVL Trees and Red-Black Trees are the most common self-balancing BSTs. Both guarantee O(log n) complexity for search, insertion, and deletion. I chose the Red-Black Tree due to:

Slightly Worse Balance but Faster Operations: While AVL trees enforce stricter balancing, Red-Black Trees maintain a 'good enough' balance, usually allowing for somewhat faster insertions and deletions. Since we prioritize search here, the trade-off is in our favor.
Standard Library: The Java standard library's built-in TreeMap uses Red-Black Tree balancing under the hood.

Notes

The included code needs you to implement the standard BST insertion methods along with appropriate Red-Black Tree balancing logic (rotations, color changes).
This assumes your dataset and list items are Comparable (here I used Integer). Use an appropriate Comparator in the BST logic if necessary.
Let me know if you'd like me to provide the Red-Black Tree balancing code, or demonstrate how to adapt this if you wish to use a TreeMap.
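As a lighter-weight alternative to hand-rolling the balancing logic, the JDK's java.util.TreeSet is itself backed by a Red-Black Tree; a minimal sketch of the same allMatch check (the sample values are illustrative, not from the original answer):

```java
import java.util.List;
import java.util.TreeSet;

public class TreeSetMembership {
    public static void main(String[] args) {
        // TreeSet delegates to TreeMap, a Red-Black Tree, so it stays balanced
        TreeSet<Integer> bst = new TreeSet<>(List.of(5, 3, 8, 1, 4));

        // Same stream/allMatch pattern, without writing contains() by hand
        boolean all = List.of(3, 4, 8).stream().allMatch(bst::contains);
        System.out.println(all); // true
    }
}
```

Since membership is all we need here, a set is arguably a better fit than a map.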

obriensystems commented 4 months ago

The Gemini Ultra model narrows the code down to bst.containsKey(item)

Ultra, could you give me an example using java lambda/streaming code - where a binary search tree is used to check whether all the items in a list are in the BST. Ideally the runtime of the operation is worst case O(log n) but ideally close to constant time. It is also essential that the BST remains balanced. As part of your answer detail whether and why you used a red black tree or AVL tree balancing algorithm. Could you include the balancing and deletion code.

Absolutely! Let's craft an example Java solution with the requirements you've outlined. Here's a breakdown, keeping it concise but providing explanations:

Assumptions:

Integer Data: Our binary search tree (BST) will store Integers for simplicity.
External List: The external list of items to check exists separately.

Choice of Balanced BST: Red-Black Tree

I'd recommend a Red-Black Tree (RBT) implementation for several reasons:

Guaranteed Balance: RBTs offer self-balancing, ensuring worst-case O(log n) for searches, insertions, and deletions. This aligns with your runtime goal.
Close to Constant Time: While not always constant time, in practice, well-balanced RBTs keep search depths very shallow, often making lookups extremely fast.
Common & Available: RBTs are widely implemented—you can likely use standard Java libraries (e.g., java.util.TreeMap is based on RBT).

The Code (Illustrative)

```java
import java.util.List;
import java.util.TreeMap;

public class BSTChecker {
    private TreeMap<Integer, Integer> bst = new TreeMap<>(); // RBT for self-balancing

    // ... Constructor, other methods if needed ...

    public boolean allItemsInBST(List<Integer> itemList) {
        for (Integer item : itemList) {
            if (!bst.containsKey(item)) {
                return false; // Item not found
            }
        }
        return true; // All items present
    }

    // Standard Red-Black Tree insertion, balancing, and deletion
    // ... (Omitted for brevity, as these are provided by TreeMap) ...
}
```

Explanation

BSTChecker: A simple class encapsulating the logic.
bst: An instance of TreeMap provides our Red-Black Tree.
allItemsInBST: The core method; it iterates through the itemList and utilizes the efficient containsKey lookup of the RBT.
Balancing/Deletion: TreeMap handles the complexities of Red-Black Tree rebalancing and node deletion internally. We don't need to write that code ourselves.

Why Not AVL Tree?

AVL Trees are also self-balancing BSTs. However, RBTs typically offer slightly better average-case performance due to less strict balancing rules. For most practical purposes, the difference is small, and using the built-in TreeMap is highly convenient.

Important Notes

Populate Your BST: You'll need to insert the data to build up your BST before running checks.
Error Handling: A real-world solution would likely incorporate more robust error handling.

Caveats

Truly Constant Time? Completely constant time is unachievable due to factors such as list iteration. However, BST lookups on a balanced tree generally perform exceptionally fast.
Library Implementations: Specific balancing details of TreeMap might vary by the underlying Java version.

Let me know if you'd like me to expand on the Red-Black Tree implementation itself (rotations, color flips, etc.), or on more sophisticated use cases!
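The for-loop inside allItemsInBST can equivalently be written in the lambda/streaming style the question asked for; a minimal usage sketch (the sample values are illustrative, not from the original answer):

```java
import java.util.List;
import java.util.TreeMap;

public class BSTCheckerDemo {
    public static void main(String[] args) {
        TreeMap<Integer, Integer> bst = new TreeMap<>(); // Red-Black Tree under the hood
        for (int v : new int[]{5, 3, 8, 1, 4}) {
            bst.put(v, v); // only the keys matter for a membership check
        }

        // allMatch short-circuits on the first missing item
        boolean allPresent = List.of(3, 4, 8).stream().allMatch(bst::containsKey);
        boolean withMissing = List.of(3, 9).stream().allMatch(bst::containsKey);

        System.out.println(allPresent);  // true
        System.out.println(withMissing); // false
    }
}
```

The method reference bst::containsKey keeps the stream pipeline as terse as the bst.containsKey(item) form noted above.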

obriensystems commented 4 months ago

Java or Python for machine learning


Question

Ultra, do you think it is prudent to continue with Java or switch to Python. The reason I ask this is in terms of machine learning APIs - most are in python. Ideally I would like you to point out where Java integration with machine learning - like TensorFlow is increasing.

Ultra Answer

That's an excellent question, and it highlights a genuine trend in the machine learning landscape. Here's a breakdown of factors to consider, along with an analysis of Java's position:

Python's Dominance in Machine Learning

Several factors have led to Python's widespread adoption in machine learning:

Accessibility: Python's clear syntax and readability make it an excellent choice for beginners and researchers to rapidly iterate and prototype without complex language hurdles.
Rich Ecosystem: Libraries like NumPy, Pandas, scikit-learn, TensorFlow, and PyTorch provide the crucial building blocks for data manipulation, statistical analysis, and building complex neural network architectures.
Community & Resources: Python boasts a vast, active community of ML developers. This results in a wealth of tutorials, documentation, and support readily available.

Java's Strengths and Place in ML

Java isn't without its compelling attributes for certain machine learning scenarios:

Production Environments: Java shines in the development of large-scale, enterprise-grade systems where its robustness, performance optimization, and JVM ecosystem become an advantage.
Legacy Systems: If you're integrating ML components into existing Java applications, you may have an easier time due to language compatibility.
Deeplearning4j: Frameworks like Deeplearning4j (DL4J) are designed explicitly for deep learning on the JVM, providing native Java tooling.

Expanding Java Integration

There's recognition of the need for stronger Java integration with the ML ecosystem:

TensorFlow Java APIs: While more limited than those for Python, TensorFlow continues to provide and somewhat improve its Java API support.
Bridging Languages: Tools like Jython enable interaction between Java code and libraries written in Python.
Specialized Frameworks: Frameworks like DJL (Deep Java Library) are evolving to provide a smooth deep-learning experience on the JVM, sometimes built on top of popular Python libraries.

Making Your Decision: What to Consider

Project Scope: Are you mainly experimenting, prototyping, and researching? Python's flexibility often wins here. Are you building production-level ML systems tightly integrated with a Java backend or infrastructure? Java may offer greater control and cohesion.
Team Skills: Is your team proficient in Java? Learning Python in parallel may add overhead.
Performance vs. Development Speed: In many ML projects, development speed trumps pure raw performance, and Python generally lets you develop models faster than Java. If you have strict performance requirements in a JVM-centric environment, that could shift the balance.

It's Not All or Nothing

Importantly, you don't necessarily need a complete language switch. Hybrid approaches are perfectly reasonable:

Python for Prototyping, Java for Production: Develop and test models in Python and later re-implement or translate them into Java if scalability and maintenance demand it.
Microservices: Expose a trained Python ML model via an API that allows your Java application to interact with it seamlessly.

The Future

It's likely that JVM-based ML libraries will continue to improve, opening up more doors for Java use. However, Python's deep roots are not going to be easily displaced anytime soon.

Let me know if you have a specific use case or project in mind. I can offer a more tailored recommendation based on your particular situation!
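The microservices option mentioned above can be sketched with the JDK's built-in java.net.http client calling a Python model service. The endpoint URL and JSON payload below are hypothetical placeholders, not from the original answer:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ModelClient {

    // Build a POST request for a hypothetical Python model endpoint
    // (e.g. a Flask/FastAPI service the Java application would call)
    public static HttpRequest buildPredictRequest(String json) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8000/predict")) // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildPredictRequest("{\"features\": [1.0, 2.0]}");
        // The request would be sent with HttpClient.newHttpClient().send(req, ...)
        System.out.println(req.method() + " " + req.uri()); // POST http://localhost:8000/predict
    }
}
```

This keeps the Java side free of any ML dependencies; the model stays in Python behind the HTTP boundary.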

obriensystems commented 4 months ago
obriensystems commented 4 months ago

Is there a way we can maintain learned context between separate conversations

Unfortunately, there isn't a reliable way to directly maintain learned context between entirely separate conversations with me (Ultra) or most similar large language models (LLMs). Here's why:

How Current LLMs Work

Internal State: While I process and adapt to some extent within a conversation, my internal state resets in between separate sessions. It's analogous to refreshing a webpage – a new context starts.
No User Profiles: I'm not designed to build or reference permanent profiles tied to specific users. Personalization occurs dynamically based on your current dialog, not through stored data about you.
Computational Cost: Storing and tracking conversational context for every user, across vast numbers of interactions, poses tremendous computational demands, far beyond the current capabilities of conversational AI.

Potential Workarounds (With Tradeoffs)

Providing Summaries: At the start of a new conversation, you could briefly summarize key points or past interests. This can partially inform how I respond. However, it requires effort on your part.

Hypothetical AI Systems: In theory, a hugely sophisticated AI could, with user authorization, store and analyze conversation histories across interactions to build dynamic profiles. This raises substantial questions of privacy and computational requirements. Such technology does not currently exist.

Narrowly Focused AI: AI assistants built for specific tasks (e.g., technical support bots) can sometimes retain past data relevant to their domain. However, they would lack the versatility of a multi-topic AI like me.

Why the Limit is Intentional

Privacy: Not building individual user profiles is often preferable from a privacy standpoint. Your interactions remain ephemeral by design.
Data Challenges: The ability to access and retain all past data for context creation could result in accidental retention of sensitive information, increasing risk.
Focus on Present Interaction: For many of my intended functions, focusing on the current conversation with a fresh start enhances effectiveness by not having to weigh potentially unrelated prior conversations.

The Future

The evolution of personalized conversational AI remains possible, but not without complex questions. Potential paths include:

Opt-in Profiles: Perhaps with explicit user consent, some context persistence on a limited scale could be enabled.
Advanced Summarization: AI might generate concise summaries of dialog threads, presented just before a new conversation begins.
Privacy-Focused Techniques: Development of advanced techniques preserving context while safeguarding data privacy is necessary.

Let me know if you'd like to either brainstorm future possibilities or explore how even within the current limitations, we might make our conversations flow more smoothly!