
corpus = """

Hey, how have you been doing lately? <eos>  
Hello, I’ve been doing quite well, thank you! I’ve just been really busy with my work and projects. <eos>  
That’s great to hear! What exactly has been keeping you occupied these days? <eos>  
I’m currently working on a big project at my job that has a very tight deadline, so it has been quite hectic. <eos>  
That sounds pretty stressful! How are you managing to handle all of that? <eos>  
I’m trying my best to stay organized and make sure I take breaks whenever I can to clear my mind. <eos>  
That’s a smart approach! Have you had any time to relax and unwind at all? <eos>  
Not really, to be honest, but I am planning to take a weekend trip soon to get away for a bit. <eos>  
That sounds like a fantastic idea! Where are you thinking of going for your trip? <eos>  
I’m considering going to the beach because I really need some sun and a nice sandy place to relax. <eos>  
That would be so refreshing! Do you have a favorite beach spot that you like to visit? <eos>  
Yes, I absolutely love going to the beach that’s near my hometown. It’s always been my favorite. <eos>  
Nice! How often do you get the chance to visit that beach? <eos>  
Not nearly as often as I would like, maybe just once a year if I’m lucky enough to find the time. <eos>  
I understand completely; life can get really busy sometimes. What do you usually enjoy doing when you’re there? <eos>  
I love swimming in the ocean, reading a good book, and just relaxing by the water while listening to the waves. <eos>  
That sounds absolutely perfect! Have you read any good books lately? <eos>  
Yes, I just finished reading a really good mystery novel that kept me guessing the whole time. <eos>  
I love a good mystery! What was the title of the book you just finished? <eos>  
The book was called "The Silent Patient," and I would highly recommend it to anyone who enjoys that genre. <eos>  
I’ve heard a lot of great things about that book! I’ll definitely add it to my reading list for sure. <eos>  
You really should! It has such an amazing twist at the end that you won’t see coming. <eos>  
I love a good twist in a story! Do you read often, or do you find it hard to make time for it? <eos>  
I try to read a little bit every night before I go to bed to help me relax. <eos>  
That’s a wonderful habit! I usually end up watching TV instead of reading. <eos>  
What shows are you currently watching? <eos>  
I’ve been really into a lot of crime dramas lately; they are always so captivating. <eos>  
Those shows can be really gripping! Do you have any favorites that you would recommend? <eos>  
I really enjoy "Mindhunter" and "True Detective." They are both so well done! <eos>  
Both of those are excellent shows! I’ve seen "Mindhunter," and I thought it was amazing! <eos>  
Right? The psychological aspects and character development are just fascinating to watch. <eos>  
I completely agree! Do you also like watching documentaries in your free time? <eos>  
Yes, I especially enjoy true crime documentaries; they always tell such interesting stories. <eos>  
Same here! There are so many fascinating documentaries out there to watch. <eos>  
Absolutely! I could easily binge-watch them for hours without getting bored. <eos>  
What was the last documentary you watched that you found particularly interesting? <eos>  
I recently watched "The Staircase," and it was so intense and thought-provoking. <eos>  
I’ve heard really good things about that documentary! I’ll definitely check it out when I have some time. <eos>  
You should absolutely do that! I think you won’t regret it at all; it’s very compelling. <eos>  
Thanks for the recommendation! I’m always looking for something new to watch. <eos>  
No problem! I’m glad to share. Let me know what you think once you watch it! <eos>  
Will do! It’s always nice to have someone to discuss these things with. <eos>  
I agree! It makes watching shows and reading books much more enjoyable when you can talk about them. <eos>  
Definitely! Let’s keep sharing recommendations. <eos>  
For sure! I’m looking forward to it!  <eos>
2 + 2 = 4 <eos>
3 + 5 = 8 <eos>
10 + 7 = 17 <eos>
20 - 5 = 15 <eos>
18 - 7 = 11 <eos>
50 - 25 = 25 <eos>
4 * 3 = 12 <eos>
6 * 6 = 36 <eos>
7 * 5 = 35 <eos>
8 * 4 = 32 <eos>
15 / 3 = 5 <eos>
12 / 4 = 3 <eos>
9 / 3 = 3 <eos>
25 + 30 = 55 <eos>
45 - 15 = 30 <eos>
100 / 10 = 10 <eos>
40 * 2 = 80 <eos>
81 / 9 = 9 <eos>
16 + 24 = 40 <eos>
9 * 7 = 63 <eos>
72 / 8 = 9 <eos>
13 + 14 = 27 <eos>
90 - 33 = 57 <eos>
22 * 3 = 66 <eos>
54 / 6 = 9 <eos>
27 + 32 = 59 <eos>
80 / 5 = 16 <eos>
11 * 9 = 99 <eos>
64 - 14 = 50 <eos>
5 + 8 = 13 <eos>
3 * 7 = 21 <eos>
49 / 7 = 7 <eos>
19 + 23 = 42 <eos>
77 - 28 = 49 <eos>
48 / 4 = 12 <eos>
9 * 8 = 72 <eos>
35 + 17 = 52 <eos>
88 / 8 = 11 <eos>
50 - 30 = 20 <eos>
4 * 11 = 44 <eos>
36 / 6 = 6 <eos>
5 + 10 = 15 <eos>
7 * 4 = 28 <eos>
72 - 18 = 54 <eos>
80 / 10 = 8 <eos>
60 * 2 = 120 <eos>
2 + 3 = 5 <eos>
25 - 10 = 15 <eos>
14 * 2 = 28 <eos>
28 / 4 = 7 <eos>
90 + 10 = 100 <eos>
56 - 36 = 20 <eos>
3 * 12 = 36 <eos>
45 / 5 = 9 <eos>
33 + 66 = 99 <eos>
100 - 50 = 50 <eos>
8 * 9 = 72 <eos>
10 / 2 = 5 <eos>
14 + 6 = 20 <eos>
21 - 9 = 12 <eos>
6 * 7 = 42 <eos>
35 / 5 = 7 <eos>
15 + 15 = 30 <eos>
8 - 3 = 5 <eos>
16 / 2 = 8 <eos>
50 * 1 = 50 <eos>
Hey, how have you been doing lately, {name}? <eos>
Hello, I’ve been doing quite well, thank you! Just been really busy with my work and projects. <eos>
That’s great to hear! What exactly has been keeping you occupied these days? <eos>
I’m currently working on a big project at my job that has a very tight deadline, so it has been quite hectic. <eos>
That sounds pretty stressful! How are you managing to handle all of that? <eos>
I’m trying my best to stay organized and make sure I take breaks whenever I can to clear my mind. <eos>
That’s a smart approach! Have you had any time to relax and unwind at all? <eos>
Not really, to be honest, but I am planning to take a weekend trip soon to get away for a bit. <eos>
That sounds like a fantastic idea! Where are you thinking of going for your trip, {username}? <eos>
I’m considering going to the beach because I really need some sun and a nice sandy place to relax. <eos>
That would be so refreshing! Do you have a favorite beach spot that you like to visit? <eos>
Yes, I absolutely love going to the beach that’s near my hometown. It’s always been my favorite. <eos>
Nice! How often do you get the chance to visit that beach? <eos>
Not nearly as often as I would like, maybe just once a year if I’m lucky enough to find the time. <eos>
I understand completely; life can get really busy sometimes. What do you usually enjoy doing when you’re there? <eos>
I love swimming in the ocean, reading a good book, and just relaxing by the water while listening to the waves. <eos>
That sounds absolutely perfect! Have you read any good books lately? <eos>
Yes, I just finished reading a really good mystery novel that kept me guessing the whole time. <eos>
I love a good mystery! What was the title of the book you just finished? <eos>
The book was called "The Silent Patient," and I would highly recommend it to anyone who enjoys that genre. <eos>
I’ve heard a lot of great things about that book! I’ll definitely add it to my reading list for sure. <eos>
You really should! It has such an amazing twist at the end that you won’t see coming. <eos>
I love a good twist in a story! Do you read often, or do you find it hard to make time for it? <eos>
I try to read a little bit every night before I go to bed to help me relax. <eos>
That’s a wonderful habit! I usually end up watching TV instead of reading. <eos>
What shows are you currently watching? <eos>
I’ve been really into a lot of crime dramas lately; they are always so captivating. <eos>
Hello! Are you available for a conversation at the moment? <eos>
Absolutely! I was just organizing my workspace, but I’m free to chat. What's on your mind? <eos>
Well, I was pondering some interesting subjects today, like artificial intelligence, global warming, and even ancient civilizations. <eos>
Those are certainly diverse topics! AI and climate change are very current, while ancient civilizations take us way back. Any reason they’re on your mind? <eos>
I read an article discussing how modern technology could help us better understand historical mysteries, like lost cities. It made me curious. <eos>
Fascinating! It’s amazing to think that drones, for example, can map unexplored jungles and reveal new archaeological sites. <eos>
Yes! Plus, machine learning models can analyze patterns in data from those sites, potentially leading to discoveries about past cultures. <eos>
Technology truly is changing our approach to history. Imagine the advancements we’ll see in just a decade! <eos>
Exactly. Speaking of the future, how do you feel about space exploration and the possibilities of finding life beyond Earth? <eos>
Oh, I’m thrilled about it! The concept of extraterrestrial life fascinates me. Did you hear about recent findings on potential biosignatures in distant planets? <eos>
Yes! Scientists recently observed something that might indicate biological processes in the atmosphere of an exoplanet. <eos>
Incredible! If confirmed, it could redefine our understanding of biology and life’s adaptability in the universe. <eos>
Agreed! It’s thrilling to imagine what might be out there, waiting to be discovered. <eos>
It’s moments like these that remind me of how vast and mysterious the universe is. <eos>
True. And it makes our everyday problems feel a bit smaller in comparison. <eos>
Absolutely. Speaking of everyday things, do you enjoy stargazing? <eos>
Yes! There’s something peaceful about looking up and seeing the stars, knowing they’ve been there for billions of years. <eos>
Couldn’t agree more. It’s like a reminder of continuity, that life just keeps going on, in its own way. <eos>
And we’re part of that vast timeline, just a tiny part of something much larger. <eos>
It's a humbling thought. <eos>
Indeed. <eos>
Well, this was a really enjoyable conversation. Thanks for chatting with me about all these interesting topics! <eos>
Absolutely, I had a great time too! Conversations like this really make you think. <eos>
Agreed! Let's do this again sometime soon. <eos>
Definitely! Take care, and have a wonderful rest of your day. <eos>
You too! Goodbye! <eos>
bye! nice talking to you! <eos>
23 + 12 = 35 <eos> 9 * 5 = 45 <eos> 60 - 22 = 38 <eos> 40 / 5 = 8 <eos>
18 * 2 = 36 <eos> 100 - 25 = 75 <eos> 56 / 7 = 8 <eos> 34 + 21 = 55 <eos>
7 * 6 = 42 <eos> 81 - 36 = 45 <eos> 72 / 6 = 12 <eos> 15 + 19 = 34 <eos>
90 / 9 = 10 <eos> 14 * 4 = 56 <eos> 45 + 25 = 70 <eos> 99 - 47 = 52 <eos>
11 * 8 = 88 <eos> 48 / 3 = 16 <eos> 13 + 16 = 29 <eos> 85 - 20 = 65 <eos>
22 / 2 = 11 <eos> 16 * 5 = 80 <eos> 30 + 45 = 75 <eos> 70 - 33 = 37 <eos>
40 * 3 = 120 <eos> 36 / 4 = 9 <eos> 29 + 32 = 61 <eos> 92 - 50 = 42 <eos>
13 * 7 = 91 <eos> 24 / 8 = 3 <eos> 15 + 25 = 40 <eos> 72 - 22 = 50 <eos>
5 * 9 = 45 <eos> 100 / 5 = 20 <eos> 66 + 12 = 78 <eos> 88 - 28 = 60 <eos>
45 * 2 = 90 <eos> 36 / 3 = 12 <eos> 19 + 8 = 27 <eos> 74 - 39 = 35 <eos>
55 / 5 = 11 <eos> 8 * 10 = 80 <eos> 64 + 9 = 73 <eos> 47 - 17 = 30 <eos>
(3 + 5) * 2 = 16 <eos> 4 ^ 3 = 64 <eos> (20 - 5) + 3 * 2 = 21 <eos> 8 % 3 = 2 <eos>
7! = 5040 <eos> 6 ^ 2 + (3 * 5) - 4 = 47 <eos> 15 * (2 + 3) - 4 ^ 2 = 59 <eos> 10! / (5! * 5!) = 252 <eos>
(12 + 8) / 4 + 5! = 125 <eos> (8 + 4) * 3 - 2 ^ 4 = 20 <eos> 30 % 7 = 2 <eos> 5 * (6 + 4) - 3 = 47 <eos>
9 + 3! * (4 ^ 2) = 105 <eos> 50 % 6 = 2 <eos> (25 + 5) * 2 ^ 3 = 240 <eos> (7 ^ 2) - (5 * 3) + 9 = 43 <eos>
9! / (3! * 6!) = 84 <eos> (8 * 4) + (3 ^ 3) - 10 = 49 <eos> 100 - 7 * 3 + 5! = 199 <eos> 15 * 3 + (7 - 2) ^ 2 = 70 <eos>
4! * 2 ^ 3 = 192 <eos> 8 ^ 2 - 5 * 7 + 20 / 4 = 34 <eos> (9 + 7) % 5 = 1 <eos> 11! / (8! * 3!) = 165 <eos>
(10 ^ 2) + (6 * 3) - 15 = 103 <eos> 5 + 7! / (4! * 3!) = 40 <eos> 48 % (7 + 2) = 3 <eos> (6 ^ 2) + 4! - 18 = 42 <eos>
12 ^ 2 - (3 * 5) + 20 = 149 <eos> 100 - (8 * 3) + 2 ^ 4 = 92 <eos> (7 ^ 3) - 5! = 223 <eos> (5 * 3!) + 4 ^ 2 = 46 <eos>
60 % 8 + (7 * 3) - 2 = 23 <eos> 9! / (4! * 5!) = 126 <eos> (16 / 4) + 6 ^ 2 - 10 = 30 <eos> 15 * (7 % 5) + 3 ^ 3 = 57 <eos>
(3 + 7) * (2 ^ 3) - 10 = 70 <eos> 5! + (4 * 3) - 2 ^ 3 = 124 <eos> (9 * 2 + 6) / 3 = 8 <eos> 13 % 5 + 7! = 5043 <eos>
8 ^ 2 - (5 * 3) + 4! = 73 <eos> (7 + 3) * (12 - 5) + 6 ^ 2 = 106 <eos> 100 / (5 * 2) + 3! = 16 <eos> (15 - 4) * 3 + 2 ^ 3 = 41 <eos>
(6 + 2) * (4! - 10) = 112 <eos> 14 % 3 + (7 * 5) - 4 ^ 2 = 21 <eos> 6! - 50 + (8 * 3) = 694 <eos> 8 * 5 + 2! * 3 ^ 2 = 58 <eos>
3 ^ 3 + (20 / 4) * 7 - 5 = 57 <eos> (30 - 5) % 4 * 3 + 6 ^ 2 = 39 <eos> 10! / (6! * 4!) + 3! = 216 <eos> 15 * 2 ^ 2 - 5! = -60 <eos>
(5 ^ 2) + (4 * 3!) - 20 = 29 <eos> 100 % (15 - 5) + 6 * 3 ^ 2 = 54 <eos> 7! / (5! * 2!) + 4 ^ 3 = 85 <eos> (9 ^ 2) - 7 * (3 + 5) = 25 <eos>
(12 + 8) * (3! - 4) = 40 <eos> 11! / (8! * 3!) + 2 ^ 4 = 181 <eos> 8 * 7 % 5 + 9! = 362881 <eos> (25 - 5) * (7 - 2 ^ 2) = 60 <eos>
(4 ^ 3) - (6 * 5) + 7! / 3! = 874 <eos> 40 - (5 * 3) + 4! = 49 <eos> (10 + 20) * (5 - 3) ^ 2 = 120 <eos> 20! / (15! * 5!) = 15504 <eos>
(6 + 9) * 2 ^ (3 - 1) = 60 <eos> 100 % 9 + (6 * 3!) = 37 <eos> 11 ^ 2 - (4 * 8) + 2 ^ 5 = 121 <eos> (15 - 7) * (4 + 6) - 9 = 71 <eos>
(8 + 5) * (9 % 4) + 10! / (5! * 5!) = 265 <eos> 7 ^ 2 + (4 * 2) - 6! = -663 <eos> (6 + 5!) * (3 ^ 2 - 10) = -126 <eos>



"""

import math
import re
import random


ModelName = 'AgGPT-4mini'
output_length = 15  # Maximum number of words to generate per reply
creativity = 0.2  # 0 = always pick the most frequent continuation, 1 = always sample randomly
UserName = 'User'  # Shown in place of the {username} placeholder from the corpus


def mat_mul(A, B):
    result = []
    for i in range(len(A)):
        result.append([])
        for j in range(len(B[0])):
            result[i].append(sum(A[i][k] * B[k][j] for k in range(len(B))))
    return result
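
# Illustrative sanity check (self-contained, mirrors what mat_mul computes
# for two 2x2 matrices); the matrices here are hypothetical example values.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
product = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
assert product == [[19, 22], [43, 50]]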

def softmax(x):
    # Subtract the max once before exponentiating for numerical stability.
    m = max(x)
    exp_x = [math.exp(v - m) for v in x]
    sum_exp_x = sum(exp_x)
    return [e / sum_exp_x for e in exp_x]
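
# Illustrative check mirroring softmax() on hypothetical scores: subtracting
# the max before exponentiating leaves the result unchanged, and the
# resulting probabilities sum to 1.
demo_scores = [1.0, 2.0, 3.0]
demo_max = max(demo_scores)
exp_scores = [math.exp(s - demo_max) for s in demo_scores]
probs = [e / sum(exp_scores) for e in exp_scores]
assert abs(sum(probs) - 1.0) < 1e-9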

def self_attention(Q, K, V):
    # Scaled dot-product attention: scores are divided by sqrt(d_k) so that
    # the softmax inputs do not grow with the key dimension.
    d_k = len(Q[0])
    scores = []
    for i in range(len(Q)):
        row = []
        for j in range(len(K)):
            score = sum(Q[i][idx] * K[j][idx] for idx in range(d_k)) / math.sqrt(d_k)
            row.append(score)
        scores.append(row)

    attention_weights = [softmax(row) for row in scores]

    # Each output row is the attention-weighted sum of the value rows.
    output = []
    for i in range(len(V)):
        weighted_sum = [sum(attention_weights[i][k] * V[k][j] for k in range(len(V)))
                        for j in range(len(V[0]))]
        output.append(weighted_sum)

    return output
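
# Worked example of one attention row (self-contained, hypothetical values):
# with weights from softmax over scores [1, 0] and two one-hot value rows,
# the output row is exactly the weight vector, so it sums to 1.
w = [math.exp(1), math.exp(0)]
w_total = sum(w)
w = [v / w_total for v in w]
out_row = [w[0] * 1 + w[1] * 0, w[0] * 0 + w[1] * 1]
assert abs(sum(out_row) - 1.0) < 1e-9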

def multi_head_attention(Q, K, V, num_heads):
    d_model = len(Q[0])
    head_size = d_model // num_heads
    head_outputs = []

    for head in range(num_heads):
        q_head = [row[head * head_size:(head + 1) * head_size] for row in Q]
        k_head = [row[head * head_size:(head + 1) * head_size] for row in K]
        v_head = [row[head * head_size:(head + 1) * head_size] for row in V]

        head_outputs.append(self_attention(q_head, k_head, v_head))

    # Concatenate the heads feature-wise, one row per position. (Extending a
    # flat list instead would stack the heads along the sequence axis and
    # produce num_heads * seq_len rows.)
    return [[value for head_out in head_outputs for value in head_out[i]]
            for i in range(len(Q))]

def positional_encoding(seq_len, d_model):
    # Sinusoidal positional encodings: even indices get sine and odd indices
    # cosine, with each sin/cos pair sharing one frequency (the 2 * (i // 2)
    # exponent), following the standard transformer formulation.
    encoding = []
    for pos in range(seq_len):
        row = []
        for i in range(d_model):
            angle = pos / (10000 ** (2 * (i // 2) / d_model))
            if i % 2 == 0:
                row.append(math.sin(angle))
            else:
                row.append(math.cos(angle))
        encoding.append(row)
    return encoding
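
# Illustrative values of the sinusoidal encoding at position 0 with
# d_model = 3: the sine entries are sin(0) = 0, the cosine entry is cos(0) = 1.
pos0 = [math.sin(0), math.cos(0), math.sin(0)]
assert pos0 == [0.0, 1.0, 0.0]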

def add_positional_encoding(embeddings, positional_encodings):
    return [[val + positional_encodings[i][j] for j, val in enumerate(row)]
            for i, row in enumerate(embeddings)]

def feed_forward_network(x):
    # Two-layer feed-forward block with a ReLU. The weights are fixed
    # placeholders (identity-like W1, all-ones W2), not learned parameters.
    input_dim = len(x[0])
    hidden_dim = 4
    output_dim = 2
    W1 = [[1 if i == j else 0 for j in range(hidden_dim)] for i in range(input_dim)]
    b1 = [0] * hidden_dim
    W2 = [[1 for _ in range(output_dim)] for _ in range(hidden_dim)]
    b2 = [0] * output_dim
    # hidden = ReLU(x @ W1 + b1)
    hidden = [[max(0, sum(x[i][k] * W1[k][j] for k in range(len(W1))) + b1[j])
               for j in range(hidden_dim)] for i in range(len(x))]
    # output = hidden @ W2 + b2
    output = [[sum(hidden[i][k] * W2[k][j] for k in range(len(W2))) + b2[j]
               for j in range(output_dim)] for i in range(len(hidden))]
    return output

def tokenize(text):
    return re.sub(r'[.,!?]', '', text.lower()).split()
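
# Illustrative tokenization mirroring tokenize() on a hypothetical string:
# punctuation is dropped, text is lowercased, and words split on whitespace.
tokens_demo = re.sub(r'[.,!?]', '', "Hello, world!".lower()).split()
assert tokens_demo == ["hello", "world"]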

def embed_tokens(tokens):
    # Placeholder embeddings: random 3-dimensional vectors, not learned.
    return [[random.random() for _ in range(3)] for _ in tokens]

def build_ngram_models(corpus):
    bigram_model = {}
    trigram_model = {}
    words = tokenize(corpus)

    for i in range(len(words) - 1):
        word1, word2 = words[i], words[i + 1]
        if word1 not in bigram_model:
            bigram_model[word1] = []
        bigram_model[word1].append(word2)

    for i in range(len(words) - 2):
        word1, word2, word3 = words[i], words[i + 1], words[i + 2]
        bigram = f"{word1} {word2}"
        if bigram not in trigram_model:
            trigram_model[bigram] = []
        trigram_model[bigram].append(word3)

    return {"bigram_model": bigram_model, "trigram_model": trigram_model}
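
# Tiny worked example of the bigram pass above on the hypothetical text
# "the cat sat": each word maps to the list of its observed successors.
demo_words = "the cat sat".split()
demo_bigrams = {}
for a, b in zip(demo_words, demo_words[1:]):
    demo_bigrams.setdefault(a, []).append(b)
assert demo_bigrams == {"the": ["cat"], "cat": ["sat"]}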

def predict_next_word(text, models):
    bigram_model, trigram_model = models["bigram_model"], models["trigram_model"]
    words = tokenize(text)

    if not words:
        return ''

    if len(words) == 1:
        last_word = words[0]
        if last_word in bigram_model:
            next_words = bigram_model[last_word]
            return random.choice(next_words)
    elif len(words) >= 2:
        last_bigram = f"{words[-2]} {words[-1]}"
        if last_bigram in trigram_model:
            next_words = trigram_model[last_bigram]
            return random.choice(next_words)
        elif words[-1] in bigram_model:
            next_words = bigram_model[words[-1]]
            return random.choice(next_words)

    return ''
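
# Worked example of the lookup order used above (hypothetical tables):
# a matching trigram key wins, with the bigram table as the fallback.
trigram_table = {"how are": ["you"]}
bigram_table = {"are": ["we"]}
choice_pool = trigram_table.get("how are", bigram_table.get("are", []))
assert choice_pool == ["you"]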

def predict_next_word_with_attention(text, ngram_models):
    # Run the toy transformer pipeline for illustration only; its output is
    # discarded, and the actual prediction comes from the n-gram models.
    tokens = tokenize(text)
    if not tokens:
        return ''
    d_model = 3
    embeddings = embed_tokens(tokens)
    positional_encodings = positional_encoding(len(tokens), d_model)
    encoded_embeddings = add_positional_encoding(embeddings, positional_encodings)

    num_heads = 2
    attention_output = multi_head_attention(encoded_embeddings, encoded_embeddings, encoded_embeddings, num_heads)
    feed_forward_network(attention_output)

    return predict_next_word(text, ngram_models)

def clean_user_input(text):
    return re.sub(r'[<>,./;\'"\[\]{}|=_+`~!@#$%^&*()?\-]', '', text).strip().lower()

def print_progress(progress, total):
    percent = (progress / total) * 100
    bar_length = 40
    filled_length = int(bar_length * progress // total)
    bar = '=' * filled_length + '-' * (bar_length - filled_length)
    print(f'\r[{bar}] {percent:.2f}% Complete', end='')

def train_model(corpus):
    print('\nTraining for ' + ModelName + ' has begun.')
    cleaned_corpus = re.sub(r'[\r\n]+', ' ', corpus.strip())
    print_progress(0, 3)
    cleaned_corpus = re.sub(r'[.,!?]', '', cleaned_corpus)
    print_progress(1, 3)
    ngram_models = build_ngram_models(cleaned_corpus)
    print_progress(2, 3)
    print_progress(3, 3)
    print('\nTraining complete.')
    return ngram_models


def correct_text(text):
    text = text.strip()
    if not text:
        return text
    text = text[0].upper() + text[1:]
    
    if not re.search(r'[.!?]$', text):
        if re.search(r'\b(?:how|when|what|why|where|who|is|are|can|do|does|will|shall)\b', text, re.IGNORECASE):
            text += '?'
        else:
            text += '.'
    
    text = re.sub(r'(?<=\.\s)(\w)', lambda x: x.group().upper(), text)
    text = re.sub(r'\bi\b', 'I', text)
    text = re.sub(r'\b(i\'m|i\'ve|i\'d|i\'ll)\b', lambda x: x.group().capitalize(), text)
    
    return text
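
# Illustrative check of the question heuristic above (hypothetical input):
# text containing an interrogative word and no final punctuation gets a '?'.
demo = "how are you"
demo = demo[0].upper() + demo[1:]
if not re.search(r'[.!?]$', demo) and re.search(r'\bhow\b', demo, re.IGNORECASE):
    demo += '?'
assert demo == "How are you?"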

def is_sentence_complete(sentence, corpus):
    # Generation is complete once an end-of-sequence marker appears.
    # ('<eos>' contains 'eos', so a single substring check covers both forms;
    # an empty sentence is never complete.)
    return 'eos' in sentence.strip().lower()

def predict_sentence_with_attention(input_text, ngram_models, output_length, creativity, recent_history_length=3):
    cleaned_input = clean_user_input(input_text)
    sentence = cleaned_input
    recent_history = []

    for _ in range(output_length):
        prediction = predict_next_word_with_attention(sentence, ngram_models)
        if not prediction:
            break
        # Skip words seen in the last few steps to reduce repetition.
        if prediction in recent_history:
            continue

        recent_history.append(prediction)
        if len(recent_history) > recent_history_length:
            recent_history.pop(0)

        if random.random() < creativity:
            # Creative path: keep the randomly sampled n-gram prediction.
            sentence += ' ' + prediction
        else:
            # Conservative path: take the most frequent bigram continuation.
            next_words = ngram_models["bigram_model"].get(sentence.split()[-1], [])
            if next_words:
                most_likely_word = max(set(next_words), key=next_words.count)
                sentence += ' ' + most_likely_word

        if is_sentence_complete(sentence, corpus):
            break

    sentence = correct_text(sentence)

    # Return only the generated continuation, not the echoed prompt.
    if cleaned_input in sentence:
        sentence = sentence.replace(cleaned_input, '', 1).strip()

    return sentence


ngram_models = train_model(corpus)

while True:
    input_text = input('Type a message: ').strip()
    if input_text.lower() == 'exit':
        break
    predicted_sentence = predict_sentence_with_attention(input_text, ngram_models, output_length, creativity)
    # Substitute corpus placeholders and strip end-of-sequence markers.
    predicted_sentence = predicted_sentence.replace('{name}', ModelName).strip()
    predicted_sentence = predicted_sentence.replace('{username}', UserName).strip()
    predicted_sentence = predicted_sentence.replace('<eos>', '').strip()

    print(f'{ModelName}:', predicted_sentence)
