Machine Learning Brilliance

Sharing the Magic of Machine Learning and Artificial Intelligence


Experimenting with Federated Q-Learning in a Maze Environment

Posted on February 3, 2025 by Nick Sudh

Over the weekend, I decided to play around with a simplified version of a federated Q-learning algorithm. Inspired by recent research on Fed-DVR-Q, I built a small simulation where multiple agents learn to navigate a maze using Q-learning. The twist? The agents share their learned Q-tables periodically—mimicking intermittent communication in a federated learning setup.

In this post, I’ll walk you through the code step by step, explain each block in detail, and describe the outputs I observed.


The maze is defined as a 5×5 grid where 0 represents a free cell and 1 represents a wall. The starting cell is (0, 0) and the goal (treasure) is at (4, 4). An agent receives a reward of –1 for each step (to encourage shorter paths) and +10 when it reaches the goal.

1. Defining the Maze Environment

# ---------- Maze Environment Definition ----------
import numpy as np
import matplotlib.pyplot as plt
import random
import time

# Our maze: a 5x5 grid where 0 is free and 1 is a wall.
maze = np.array([
    [0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0]
])
rows, cols = maze.shape
start = (0, 0)   # starting position (row, col)
goal = (4, 4)    # treasure (goal) position

I set up the maze as a NumPy array. The variables rows and cols capture the maze dimensions, and I designate the start and goal positions.

Next, I define the possible actions (up, down, left, right) and a function that simulates the environment’s response when an action is taken:

# Define possible actions as changes in (row, col):
actions = {
    0: (-1, 0),  # up: move to the previous row.
    1: (1, 0),   # down: move to the next row.
    2: (0, -1),  # left: move to the previous column.
    3: (0, 1)    # right: move to the next column.
}

def maze_step(state, action, maze):
    """
    Given a state (row, col) and an action index, compute the next state.
    If the agent tries to move out of bounds or into a wall, it stays in place.
    
    Returns:
        new_state (tuple): Updated (row, col) position.
        reward (int): -1 for each move, +10 when the goal is reached.
        done (bool): True if the goal is reached; otherwise False.
    """
    delta = actions[action]
    new_state = (state[0] + delta[0], state[1] + delta[1])
    # Check if the new state is out-of-bounds or a wall:
    if (new_state[0] < 0 or new_state[0] >= rows or
        new_state[1] < 0 or new_state[1] >= cols or
        maze[new_state] == 1):
        new_state = state  # Invalid move; remain in the same cell.
    # Reward: -1 per step, +10 if the agent reaches the goal.
    reward = 10 if new_state == goal else -1
    done = (new_state == goal)
    return new_state, reward, done

For each action, the function computes the new state by adding a delta. If the new cell is a wall or outside the maze, the agent remains in its current cell. The reward is –1 per step (to penalize long paths) and +10 at the goal.
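To sanity-check the environment logic, here is the step function exercised on a few moves (the setup below is condensed from the definitions above so the snippet runs on its own):

```python
import numpy as np

# Condensed from the definitions above: 5x5 maze, goal, actions, step function.
maze = np.array([
    [0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
])
rows, cols = maze.shape
goal = (4, 4)
actions = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}

def maze_step(state, action, maze):
    dr, dc = actions[action]
    new_state = (state[0] + dr, state[1] + dc)
    # Out-of-bounds or wall: stay in place.
    if not (0 <= new_state[0] < rows and 0 <= new_state[1] < cols) or maze[new_state] == 1:
        new_state = state
    return new_state, (10 if new_state == goal else -1), new_state == goal

# Moving down from the start enters a free cell:
print(maze_step((0, 0), 1, maze))   # ((1, 0), -1, False)
# Moving right from the start hits the wall at (0, 1), so the agent stays put:
print(maze_step((0, 0), 3, maze))   # ((0, 0), -1, False)
# Stepping right into the goal from (4, 3) yields +10 and done=True:
print(maze_step((4, 3), 3, maze))   # ((4, 4), 10, True)
```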

2. Federated Q-Learning Setup

I simulate a federated learning scenario with 5 agents. Each agent has its own Q-table (a 3D array indexed by [row, col, action]). After every fixed number of episodes (a communication round), the agents average their Q-tables to create a shared, global Q-table.

# ---------- Federated Q-Learning Parameters ----------
num_agents = 5            # Number of agents exploring the maze.
num_actions = 4           # Four possible actions.
gamma = 0.9               # Discount factor for future rewards.
alpha = 0.1               # Learning rate.
epsilon = 0.1             # Exploration rate (epsilon-greedy).

comm_interval = 10        # Communication round: average Q-tables every 10 episodes.
total_episodes = 30       # Total episodes per agent.

# Initialize the global Q-table as zeros, with shape (rows, cols, num_actions).
global_Q = np.zeros((rows, cols, num_actions))
# Each agent starts with a copy of the global Q-table.
local_Qs = [global_Q.copy() for _ in range(num_agents)]

I define parameters for the Q-learning process (e.g., discount factor, learning rate, epsilon). Each agent’s Q-table is initialized to zeros. Later, after every 10 episodes, I average these tables.
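Concretely, a communication round is just an element-wise mean over the stacked local tables. A minimal sketch with toy stand-in tables (much smaller than the real maze-sized ones):

```python
import numpy as np

# Toy stand-ins for five agents' local Q-tables: a 2x2 grid with 4 actions each,
# where agent i's table is filled with the constant value i.
local_Qs = [np.full((2, 2, 4), float(i)) for i in range(5)]

# Federated averaging: element-wise mean across agents.
global_Q = np.mean(np.stack(local_Qs), axis=0)
print(global_Q[0, 0])   # [2. 2. 2. 2.] -- the mean of 0..4

# Each agent then restarts from a copy of the shared table.
local_Qs = [global_Q.copy() for _ in range(5)]
```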

3. Visualization Setup

I use matplotlib to visualize the maze and animate the agents. The maze is displayed as an image, and I overlay text labels for the start (‘S’) and goal (‘G’). Each agent is represented by a colored marker.

# ---------- Visualization Setup ----------
plt.ion()  # Enable interactive mode.
fig, ax = plt.subplots(figsize=(6, 6))
ax.imshow(maze, cmap='gray_r')  # Display the maze; free cells are white.
ax.set_title("Agents in Maze")
ax.set_xticks(np.arange(cols))
ax.set_yticks(np.arange(rows))
# Mark start and goal positions:
ax.text(start[1], start[0], 'S', ha='center', va='center', color='blue', fontsize=14)
ax.text(goal[1], goal[0], 'G', ha='center', va='center', color='red', fontsize=14)

# Create a marker for each agent with distinct colors.
agent_markers = []
colors = ['blue', 'green', 'orange', 'purple', 'cyan']
for i in range(num_agents):
    # Note: imshow uses (x, y) coordinates, where x=column and y=row.
    marker, = ax.plot([start[1]], [start[0]], 'o', color=colors[i % len(colors)], markersize=12)
    agent_markers.append(marker)

plt.draw()
plt.pause(1.0)  # Pause to allow the initial plot to render.

The maze is rendered using imshow. I add text labels for clarity and create a marker (using a colored circle) for each agent. The call to plt.ion() enables interactive plotting.

4. Simulation Loop and Federated Communication

Now comes the heart of the simulation. Each agent runs an episode from the start to the goal using an epsilon-greedy Q-learning policy. As the agent moves, its position is updated on the plot. After every comm_interval episodes, I average the local Q-tables to simulate a communication round.

# ---------- Simulation Loop ----------
for ep in range(total_episodes):
    for agent in range(num_agents):
        state = start
        path = [state]  # Record the agent's path.
        done = False
        while not done:
            # Epsilon-greedy: random action with probability epsilon.
            if random.random() < epsilon:
                action = random.randint(0, num_actions - 1)
            else:
                action = np.argmax(local_Qs[agent][state[0], state[1]])
            
            next_state, reward, done = maze_step(state, action, maze)
            best_next = np.max(local_Qs[agent][next_state[0], next_state[1]])
            # Q-learning update: adjust Q-value based on the received reward.
            local_Qs[agent][state[0], state[1], action] += alpha * (
                reward + gamma * best_next - local_Qs[agent][state[0], state[1], action]
            )
            state = next_state
            path.append(state)
            
            # Update the visualization: reposition the agent marker.
            # For plotting, x corresponds to the column and y to the row.
            agent_markers[agent].set_data([state[1]], [state[0]])
            ax.set_title(f"Episode {ep+1}, Agent {agent+1} is moving...")
            fig.canvas.draw_idle()
            fig.canvas.flush_events()
            plt.pause(0.2)  # Pause briefly for the animation effect.
        print(f"Episode {ep+1}, Agent {agent+1} path: {path}")
    
    # After every 'comm_interval' episodes, average the Q-tables (federated communication).
    if (ep + 1) % comm_interval == 0:
        avg_Q = np.mean(np.array(local_Qs), axis=0)
        global_Q = avg_Q.copy()
        # Update each agent's Q-table to the new global Q.
        local_Qs = [global_Q.copy() for _ in range(num_agents)]
        print(f"\nAfter episode {ep+1} (Communication Round): Global Q-table averaged.")
        ax.set_title(f"After Episode {ep+1} Communication Round - Global Q Updated")
        fig.canvas.draw_idle()
        fig.canvas.flush_events()
        plt.pause(1.0)

plt.ioff()
plt.show()

Episode Loop:
For each episode, every agent starts at the initial state and selects actions via an epsilon-greedy strategy.

Q-learning Update:
The Q-value for the current state–action pair is updated using the classic formula.
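Written out, that classic update is the standard one-step Q-learning rule:

```latex
Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]
```

with learning rate α = 0.1 and discount factor γ = 0.9, exactly as coded in the `local_Qs[agent][...] += alpha * (...)` line.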

Visualization:
I update each agent’s marker with set_data([state[1]], [state[0]]) (wrapping the coordinates in lists as required). Then I force a canvas redraw with fig.canvas.draw_idle() and flush_events() to ensure the movement is visible.

Federated Communication:
Every 10 episodes, I average the local Q-tables to create a new global Q-table, and then update each agent’s table accordingly. This step is analogous to the intermittent communication in federated Q-learning.

5. Testing the Learned Policy

After training, I test the global Q-table by having an agent run from the start using the learned policy. The optimal path is printed at the end.

# ---------- Testing the Learned Policy ----------
state = start
optimal_path = [state]
while state != goal:
    action = np.argmax(global_Q[state[0], state[1]])
    state, _, _ = maze_step(state, action, maze)
    optimal_path.append(state)
print("\nLearned optimal path in the maze:", optimal_path)

Starting from (0, 0), the code picks the action with the highest Q-value at each state to trace out the optimal path. For example, you might see output like:

Episode 2, Agent 5 path: [(0, 0), (0, 0), (0, 0), (0, 0), (1, 0), ... (4, 4)]
Episode 3, Agent 1 path: [(0, 0), (0, 0), (0, 0), (1, 0), ... (4, 4)]
...
Learned optimal path in the maze: [(0, 0), (0, 1), (1, 1), ... (4, 4)]

The output shows each agent’s path in each episode. In early episodes, the paths can be very noisy (e.g., repeated positions or loops) because the agents are still exploring. Over time, with the help of federated updates, the agents converge to a better policy, and the optimal path becomes clearer.
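One caveat about the test loop above: if training has not converged, a purely greedy rollout can cycle forever between states with misleading Q-values. A defensive variant (my own sketch, not part of the original run) caps the number of steps; with an untrained all-zeros table it simply gives up after the budget:

```python
import numpy as np

# Condensed maze setup (same grid as above).
maze = np.array([
    [0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
])
rows, cols = maze.shape
start, goal = (0, 0), (4, 4)
actions = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}

def maze_step(state, action, maze):
    dr, dc = actions[action]
    ns = (state[0] + dr, state[1] + dc)
    if not (0 <= ns[0] < rows and 0 <= ns[1] < cols) or maze[ns] == 1:
        ns = state
    return ns, (10 if ns == goal else -1), ns == goal

def greedy_rollout(Q, start, goal, maze, max_steps=100):
    """Follow the greedy policy from start, bailing out after max_steps."""
    state, path = start, [start]
    for _ in range(max_steps):
        if state == goal:
            return path
        action = int(np.argmax(Q[state[0], state[1]]))
        state, _, _ = maze_step(state, action, maze)
        path.append(state)
    return path  # May not reach the goal if the policy is undertrained.

# With an untrained (all-zeros) table, argmax always picks "up", so the
# agent stays pinned at (0, 0) and the rollout stops at the step budget.
path = greedy_rollout(np.zeros((rows, cols, 4)), start, goal, maze, max_steps=10)
print(len(path), path[-1])   # 11 (0, 0)
```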

6. Reflections and Observations

  • Learning Dynamics:
    As the sample paths above show, agents initially get stuck (repeating (0, 0)) or take non-optimal moves. After a few communication rounds, the global Q-table improves, and the agents learn to navigate the maze more effectively.
  • Federated Benefit:
    The averaging of Q-tables (federated communication) is key: it allows all agents to benefit from each other’s experience, reducing variance in their Q-value estimates and speeding up convergence.
  • Animation:
    The real-time animation (with plt.pause, draw_idle, and flush_events) lets you visually observe the agents “wandering” in the maze. As the episodes progress, the movement patterns become noticeably more directed toward the goal.

Full code at:
mlbrilliance/Federated_Q_Learning_Maze_Simulation

Self-Reflecting AI Agents: A Deep Dive into Current Research, Benefits, Risks, and Ethical Considerations

Posted on December 28, 2024 by Nick Sudh

The evolution of artificial intelligence has brought us to an exciting frontier: self-reflecting AI agents. These sophisticated systems are revolutionizing how AI learns and adapts, marking a significant leap forward from traditional, rule-based approaches. As an AI engineer, understanding this emerging technology is crucial for staying at the forefront of innovation.

Current State of Research

The field of self-reflecting AI agents is advancing rapidly, driven by the need for more adaptable and intelligent systems. Unlike conventional AI models that follow fixed patterns, self-reflecting agents can analyze their own performance, learn from mistakes, and modify their behavior accordingly.

Recent research at Stanford’s AI lab has demonstrated promising results with “curious replay” systems, where agents review and learn from their most interesting experiences, similar to how human brains process information during sleep. Meanwhile, organizations across industries are implementing various levels of AI agents, from basic rule-based systems to sophisticated multi-agent architectures handling complex tasks like supply chain optimization and financial planning.

Real-World Applications

Self-reflecting AI agents are already making their mark in practical applications. For instance, in customer service, these agents can analyze their interactions, identify patterns in successful responses, and continuously improve their communication strategies. In software development, they’re being used to debug code more effectively by learning from previous error patterns.

Potential Benefits

The advantages of self-reflecting AI agents are substantial:

  • Enhanced problem-solving capabilities through continuous learning
  • Improved accuracy and reliability in decision-making
  • Greater adaptability to changing conditions
  • More personalized user interactions
  • Increased operational efficiency

For example, in healthcare, self-reflecting agents can analyze patient interactions, identify potential diagnosis errors, and refine their assessment protocols over time.

Potential Risks

However, these advances come with important considerations:

  • Technical limitations and potential system failures
  • Privacy concerns regarding data handling
  • Risk of unintended biases in decision-making
  • Possible negative impacts on human-AI interaction dynamics

Ethical Considerations

The development of self-reflecting AI agents raises crucial ethical questions:

  1. Transparency and Accountability
    How can we ensure these systems remain transparent while becoming more complex? We must develop robust frameworks for monitoring and understanding their decision-making processes.
  2. Bias and Fairness
    Self-reflecting agents must be designed to recognize and correct their own biases, ensuring fair treatment across all user groups.
  3. Privacy Protection
    As these agents collect and analyze more data, implementing strong privacy safeguards becomes increasingly important.

Approaches to Development

Current development approaches include:

  • Experience replay mechanisms
  • LangGraph implementation for reflection loops
  • Advanced prompt engineering techniques
  • Skill harvesting methodologies

Each approach offers unique advantages and challenges, requiring careful consideration of the specific use case.
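To make the reflection-loop idea concrete, here is a deliberately toy sketch. The draft, critique, and revise functions are hypothetical stand-ins for model calls, not any real framework’s API: the agent drafts an answer, critiques its own output, and revises until the self-check passes or a retry budget runs out.

```python
from typing import Optional

def draft(task: str) -> str:
    # Stand-in for a model call that produces a first attempt.
    return f"draft answer for: {task}"

def critique(answer: str) -> Optional[str]:
    # Stand-in for a self-evaluation step; None means "good enough".
    return None if "revised" in answer else "answer is a first draft; refine it"

def revise(answer: str, feedback: str) -> str:
    # Stand-in for a model call that incorporates its own feedback.
    return f"revised ({feedback}): {answer}"

def reflect_loop(task: str, max_rounds: int = 3) -> str:
    answer = draft(task)
    for _ in range(max_rounds):
        feedback = critique(answer)
        if feedback is None:      # Self-check passed; stop reflecting.
            break
        answer = revise(answer, feedback)
    return answer

print(reflect_loop("summarize the paper"))
```

The same generate–critique–revise skeleton underlies experience replay and LangGraph-style reflection loops; the interesting engineering is in how critique is produced and how its feedback is folded back in.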

Future Outlook

The future of self-reflecting AI agents looks promising but requires careful navigation. Key areas for future development include:

  • More sophisticated reflection mechanisms
  • Better integration with human oversight
  • Enhanced ethical frameworks
  • Improved transparency in decision-making processes

Conclusion

Self-reflecting AI agents represent a significant step forward in artificial intelligence, offering unprecedented opportunities for creating more intelligent, adaptable, and responsible AI systems. However, success in this field requires balancing technological advancement with ethical considerations and risk management.

As AI engineers, our role is to guide this development responsibly, ensuring that these powerful tools serve humanity’s best interests while mitigating potential risks. The journey ahead is challenging but exciting, promising to reshape how we think about artificial intelligence and its role in our future.

The key to success lies in maintaining a balanced approach: pushing the boundaries of what’s possible while remaining mindful of the ethical implications and potential risks. As we continue to develop these systems, our focus must remain on creating AI agents that are not just intelligent, but also transparent, fair, and beneficial to society as a whole.

The Future of Web Development: AI-Human Collaboration

Posted on December 20, 2024 (updated December 21, 2024) by Nick Sudh

The landscape of web development is dramatically transforming, driven by the rise of artificial intelligence (AI). While AI coding tools capture headlines, AI-human collaboration is the true revolution. This new paradigm empowers developers to work alongside AI, leveraging each other’s strengths to build websites and applications faster and more efficiently than ever. This shift from AI replacing developers to AI empowering developers is key to understanding the future of web development [1].

The Rise of AI in Web Development

AI is rapidly changing how we approach web development. AI-powered tools can now generate code, design user interfaces, and even optimize website performance. This automation frees developers from tedious tasks, allowing them to focus on more creative and strategic aspects of their work.

Several AI tools are leading this charge:

  • Cursor AI: This AI-powered code editor provides intelligent code suggestions, assists with debugging, and even generates code from natural language descriptions [2]. Cursor AI also boasts several advanced features:
      • Image Input for UI Design: Cursor AI can handle image inputs. For example, a developer could sketch a UI design and ask Cursor AI to generate the HTML and CSS code for it [2].
      • Documentation References: Cursor AI allows developers to add documentation references, which is particularly useful for lesser-known or private libraries [2].
      • .cursorignore File: Similar to a .gitignore file, developers can use a .cursorignore file to exclude specific files or directories from being indexed by Cursor AI, improving performance and focus [4].
      • AI Review: This feature provides real-time code review, catching potential bugs before they make it to production [4].
      • Key Features: Cursor AI is particularly known for its “pair programming” capabilities, allowing it to work alongside developers by suggesting relevant code, identifying issues, and improving code structure. It supports multiple programming languages and integrates with most major IDEs [3].
  • V0: This generative UI tool allows developers to create user interfaces by simply describing their ideas in plain language. V0 then generates the necessary code, using open-source tools and frameworks like React and Tailwind CSS, and it can also integrate with component libraries like Shadcn UI [5]. V0 also offers several powerful features:
      • Figma Integration: Developers can import and generate working applications from their Figma designs [6].
      • AI-Powered UI Enhancement: V0 uses AI to enhance UI by analyzing user input and generating code for various elements, styles, and layouts [7].
      • Subscription Plans: V0 offers a free plan and three paid plans to cater to different needs and budgets [5].
      • Shadcn UI Integration: V0 can be used with Shadcn UI, a component library that takes an “à la carte” approach, allowing developers to copy and paste only the code for the specific components they need. This flexibility allows for easier customization [8].
      • 3D Graphics Generation: V0 can generate 3D graphics, particularly with react-three-fiber, expanding the possibilities of UI design [9].
      • Coolors Integration: V0 can be used with Coolors to generate cohesive color palettes, making it easier to create visually appealing and consistent UIs [9].
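Since the .cursorignore entry above notes that the syntax mirrors .gitignore, a purely hypothetical example excluding build output and large assets might look like:

```
# .cursorignore -- hypothetical example; syntax mirrors .gitignore
node_modules/
dist/
*.min.js
assets/videos/
```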

These tools, and others like them, are not meant to replace human developers. Instead, they are designed to augment their capabilities, allowing them to work more efficiently and effectively.

The Power of Collaboration

The true potential of AI in web development lies in collaboration. By combining the strengths of humans and AI, we can achieve a new level of productivity and innovation.

Here’s how this collaboration can work:

  • AI handles the heavy lifting: AI can automate repetitive tasks, such as generating boilerplate code, optimizing images for faster loading times, and testing for bugs [10]. This frees up developers to focus on higher-level tasks, such as designing user experiences, planning architecture, and ensuring code quality [11].
  • Humans provide creativity and oversight: While AI can generate code and designs, it still lacks the creativity and critical thinking skills of humans [12]. Developers can guide the AI, refine its output, and ensure that the final product meets the desired standards.
  • Collaboration fosters innovation: By working together, humans and AI can push the boundaries of web development. AI can provide new insights and possibilities, while humans can use their expertise to turn those possibilities into reality [13].

Challenges of AI-Human Collaboration

While the benefits of AI-human collaboration are numerous, it’s important to acknowledge the challenges that come with this new approach. These include:

  • Integration with existing workflows: Integrating AI tools into existing development workflows can be challenging. Developers need to learn how to use these tools effectively and adapt their processes accordingly.
  • Ensuring accuracy and reliability: AI tools are still under development, and their output may not always be accurate or reliable. Human oversight is essential to ensure quality and prevent errors.
  • Challenges with pre-built components: While AI tools like V0 can integrate with component libraries like Shadcn UI, there can be challenges in forcing pre-built components into specific designs [14].
  • Ethical considerations: As AI becomes more prevalent in web development, it’s important to consider the ethical implications. For example, how do we ensure that AI tools are not used to create biased or discriminatory websites and applications?

Benefits of AI-Human Collaboration

This collaborative approach to web development offers numerous benefits:

  • Increased efficiency: AI can automate tasks, allowing developers to complete projects faster [15].
  • Improved quality: AI can help identify and fix bugs, leading to more robust and reliable websites and applications [15].
  • Enhanced creativity: By freeing developers from tedious tasks, AI allows them to focus on more creative aspects of their work [11].
  • Greater accessibility: AI tools can make web development more accessible to non-developers, empowering more people to build their own websites and applications [1]. This democratizing effect has the potential to make web development more inclusive, empowering individuals and businesses with limited technical expertise.
  • Enhanced Security: AI significantly contributes to improving web security by detecting vulnerabilities and potential threats in real time [16].
  • SEO Optimization: AI can aid in SEO and design optimization by providing design suggestions, creating layouts, and even generating complete designs based on user input [17].
  • Pricing and Accessibility: AI tools like Cursor AI, V0, and Bolt.new offer free tiers to help users get started. They also have different pricing models, including individual plans, team options, and credit-based systems, making them accessible to a wide range of users [18].

Tips for Effective AI-Human Collaboration

To maximize the benefits of AI-human collaboration, consider these tips:

  • Start simple: When using tools like V0, begin with simple UI elements to get a feel for how the tool works [19].
  • Be specific: The more specific you are in your descriptions and prompts, the better the results you’ll get from AI tools [19].

The Future of Web Development

The future of web development is undoubtedly intertwined with AI. However, it’s not about replacing humans with machines. It’s about empowering humans with AI. By embracing collaboration, we can unlock a new era of web development, where websites and applications are built faster, better, and more efficiently than ever before.

This future will be characterized by:

  • Economic Impact: AI could help businesses worldwide generate up to $15.7 trillion by 2030, with web applications being a primary source of this growth [20].
  • Technological Adoption: AI is projected to underpin 80% of emerging technologies by 2025 [20].
  • More sophisticated code generation: AI will be able to generate entire applications based on high-level descriptions, with humans focusing on architecture and unique features [13].
  • Advanced personalization: AI could create truly adaptive websites that change their content, layout, and functionality based on individual user preferences and behaviors [13].
  • Predictive maintenance: AI systems could predict potential issues on websites before they occur, allowing for proactive maintenance and updates [13].

Conclusion

AI-human collaboration is poised to revolutionize web development. By combining the unique strengths of humans and AI – human creativity and critical thinking with AI’s speed and efficiency – we can create a future where websites and applications are more powerful, user-friendly, and accessible than ever before. This collaboration will lead to increased efficiency, improved quality, enhanced creativity, and greater accessibility in web development. While challenges like integrating AI tools into existing workflows and ensuring ethical considerations remain, the potential benefits are too significant to ignore. As AI technology continues to evolve, we can expect even more exciting developments in the world of web development, leading to a future where building for the web is faster, smarter, and more intuitive.

Works cited

1. Building an app with AI: V0 + Cursor AI – WeAreBrain, accessed on December 19, 2024, https://wearebrain.com/blog/building-an-app-with-ai-v0-cursor-ai/

2. Cursor AI: A Guide With 10 Practical Examples – DataCamp, accessed on December 19, 2024, https://www.datacamp.com/tutorial/cursor-ai-code-editor

3. 10 AI Tools Transforming Web Development | by Victor Yakubu – JavaScript in Plain English, accessed on December 19, 2024, https://javascript.plainenglish.io/10-ai-tools-transforming-web-development-790b8ab2dfc2

4. The Ultimate Introduction to Cursor for Developers – Builder.io, accessed on December 19, 2024, https://www.builder.io/blog/cursor-ai-for-developers

5. Announcing v0: Generative UI – Vercel, accessed on December 19, 2024, https://vercel.com/blog/announcing-v0-generative-ui

6. Build a fullstack app in 7 minutes with v0 (Figma to code) – YouTube, accessed on December 19, 2024, https://www.youtube.com/watch?v=cyFVtaLy-bA

7. v0 by Vercel Review: Features, Pros, and Cons – 10Web, accessed on December 19, 2024, https://10web.io/ai-tools/v0-by-vercel/

8. Building UI Faster with Shadcn v0.dev: The New Frontier in Frontend Development, accessed on December 19, 2024, https://shaxadd.medium.com/building-ui-faster-with-shadcn-v0-dev-the-new-frontier-in-frontend-development-0a3fb21b7e0b

9. Maximizing outputs with v0: From UI generation to code creation – Vercel, accessed on December 19, 2024, https://vercel.com/blog/maximizing-outputs-with-v0-from-ui-generation-to-code-creation

10. The Future of Web Development and AI – Unicorn Platform, accessed on December 19, 2024, https://unicornplatform.com/blog/the-future-of-ai-and-web-development/

11. Human-AI Collaboration in Software Development Services – Scrums.com, accessed on December 19, 2024, https://www.scrums.com/blog/human-vs-ai-collaboration-in-software-development

12. How AI is Revolutionizing Web Development in 2024 – Bluebash, accessed on December 19, 2024, https://www.bluebash.co/blog/2024-ai-powered-web-development-trends/

13. Human-AI Collaboration: The Latest Trend in Web Development – ToXSL Technologies, accessed on December 19, 2024, https://toxsl.com/blog/402/human-ai-collaboration-the-latest-trend-in-web-development

14. What do you think about v0? : r/nextjs – Reddit, accessed on December 19, 2024, https://www.reddit.com/r/nextjs/comments/1fxhrd9/what_do_you_think_about_v0/

15. Human-AI Collaboration: A New Era In Software Development – Tekki Web Solutions, accessed on December 19, 2024, https://www.tekkiwebsolutions.com/blog/human-ai-collaboration/

16. Future Of Web Development After AI – Bobcares, accessed on December 19, 2024, https://bobcares.com/blog/future-of-web-development-after-ai/

17. The Impact of AI on Web Development: Current Trends and Future Horizons – Macrosoft Inc, accessed on December 19, 2024, https://www.macrosoftinc.com/the-impact-of-ai-on-web-development-trends-and-future/

18. Cursor AI, v0, and Bolt.new: An Honest Comparison of Today’s AI Coding Tools, accessed on December 19, 2024, https://carlrannaberg.medium.com/cursor-ai-v0-and-bolt-new-an-honest-comparison-of-todays-ai-coding-tools-b4277e1eb1f9

19. Vercel v0 and the future of AI-powered UI generation – LogRocket Blog, accessed on December 19, 2024, https://blog.logrocket.com/vercel-v0-ai-powered-ui-generation/

20. The future of web development: How AI will revolutionize the industry – Agility PR Solutions, accessed on December 19, 2024, https://www.agilitypr.com/pr-news/public-relations/the-future-of-web-development-how-ai-will-revolutionize-the-industry/

About Me

I am Nick Sudh. ML/AI guy turning complex algorithms into simple solutions | RPA wizard | Automation advocate | Sharing the magic of machine learning one post at a time.

©2026 Machine Learning Brilliance