alexfikl / pycaputo

Evaluate fractional integrals and solve fractional ODEs.
https://pycaputo.readthedocs.io

[Optimization] I tried to find the optimum using CaputoL1Method #33

Closed Velmurugan1008 closed 8 months ago

Velmurugan1008 commented 9 months ago

import numpy as np
import matplotlib.pyplot as plt

from pycaputo import CaputoDerivative, CaputoL1Method, Side, diff
from pycaputo.grid import make_uniform_points
from pycaputo.utils import Array, figure, set_recommended_matplotlib

# Define the function
def f(x: Array) -> Array:
    return (x - 5) ** 2

# Set up the Caputo derivative and the L1 method
d = CaputoDerivative(order=0.9, side=Side.Left)
method = CaputoL1Method(d)

# Generate points for plotting and computing the numerical Caputo derivative
p = make_uniform_points(256, a=0.0, b=2.0)
df_num = np.array([diff(method, f, np.array([x]))[0] for x in p.x])

# Plot the Caputo derivative using the L1 method
set_recommended_matplotlib()
with figure("caputo-derivative-l1.svg") as fig:
    ax = fig.gca()
    ax.plot(p.x, df_num, lw=5, label="$L1~Method$")
    ax.set_xlabel("$x$")
    ax.set_ylabel(r"$D^\alpha_C f$")
    ax.legend()
plt.show()

# Initial setup for optimization using CaputoL1Method
x = 0
learning_rate = 0.1
error = []

plt.plot(p.x, df_num, lw=5, label="$L1~Method$")
plt.xlabel("x")
plt.ylabel(r"$D^\alpha_C f$")

for i in range(25):
    # Use CaputoL1Method for optimization
    grad = diff(method, f, np.array([x]))[0]
    x = x - learning_rate * grad
    y = (x - 5) ** 2
    error.append(y)
    plt.scatter(x, y)

    # Print the values corresponding to each iteration
    print(x, grad)

plt.show()

I get some errors when running the fractional gradient descent part. Please help me solve this issue. Thank you.

alexfikl commented 9 months ago

Is this based on some generalized Taylor formula like https://doi.org/10.1016/j.amc.2006.07.102?
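(From memory, the expansion in that paper for $0 < \alpha \le 1$ is something like
$$f(x) = \sum_{i=0}^{n} \frac{(x - a)^{i\alpha}}{\Gamma(i\alpha + 1)} \big(D^{i\alpha}_C f\big)(a) + R_n(x),$$
which would motivate an update of the form $x \leftarrow x - \mu \, D^\alpha_C f(x)$, but do correct me if you have a different formulation in mind.)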

What sort of errors are you getting? Does just running the example in examples/caputo-derivative-l1.py work? If not, you may be missing some dependencies or the package isn't installed correctly. I haven't gotten around to publishing this on PyPI yet, so it might be a bit annoying to install...

alexfikl commented 9 months ago

for i in range(25):
    # Use CaputoL1Method for optimization
    grad = diff(method, f, np.array([x]))[0]
    x = x - learning_rate * grad
    y = (x - 5) ** 2
    error.append(y)

From a quick look at your code, the issue seems to be here. The diff(m, f, x) method computes the fractional derivative of the function $f$ at every point of the grid $\{x_i\}$. If you give it just one point, it doesn't work. Are you trying to compute something like $D_C^\alpha[f](b)$ for some fixed $b$ that you update during the optimization?

Maybe something like this:

a, b = 0, 1
for i in range(25):
    # compute the Caputo derivative on a uniform [a, b]
    x = make_uniform_points(256, a=a, b=b)
    grad = diff(method, f, x)
    # grad[-1] contains D_C^\alpha[f](b) as the endpoint result
    # so we can use it here to update the value using the fractional
    # derivative
    b = b - learning_rate * grad[-1]

Or is it more like $f(x) = \sum_i (x_i - 5)^2$ for a vector $x$? In that case, you'd have to compute the derivative component by component, because the diff function doesn't work on vectors in that way; see the sketch below.
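For example, a rough (untested) sketch of the component-by-component version, reusing the same diff and make_uniform_points calls from above:

import numpy as np

def fractional_gradient(b):
    # compute D_C^alpha[f_j](b_j) for each component j; the constant
    # contribution from the other components drops out, since the Caputo
    # derivative of a constant is zero
    grad = np.empty_like(b)
    for j in range(b.size):
        def f_j(t):
            return (t - 5) ** 2

        x = make_uniform_points(256, a=0.0, b=float(b[j]))
        # the last entry is the endpoint value D_C^alpha[f_j](b_j)
        grad[j] = diff(method, f_j, x)[-1]

    return grad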

Adding some wrapper around these functions would be very welcome!

Velmurugan1008 commented 9 months ago

Thank you for taking the time to look at my code. Herewith I have attached the code for optimizing the function $f(x) = (x - 5)^2$:

import numpy as np
import matplotlib.pyplot as plt

# Define the objective function
def objective_function(x):
    return (x - 5)**2

# Define the gradient of the objective function
def gradient(x):
    return 2 * (x - 5)

# Gradient Descent function
def gradient_descent(learning_rate, iterations):
    x = 10  # Initial guess
    history = []  # To store the history of x values for plotting

    for _ in range(iterations):
        history.append(x)
        x = x - learning_rate * gradient(x)

    return history

# Set hyperparameters
learning_rate = 0.1
iterations = 20

# Run gradient descent
trajectory = gradient_descent(learning_rate, iterations)

# Plot the objective function and the trajectory
x = np.linspace(-12, 12, 400)
y = objective_function(x)

plt.figure(figsize=(10, 5))
plt.plot(x, y, label='Objective Function')
plt.plot(trajectory, [objective_function(x) for x in trajectory], 'ro-', label='Trajectory')
plt.xlabel('x')
plt.ylabel('f(x)')
plt.title('Gradient Descent')
plt.legend()
plt.grid(True)
plt.show()

In the above code, the gradient of the objective function is based on an integer-order derivative:

# Define the gradient of the objective function
def gradient(x):
    return 2 * (x - 5)

But I would like to compute the gradient of the objective function with the Caputo L1 method from your code. This is our main goal: to use the fractional gradient instead of the integer-order gradient, roughly as in the sketch below.
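To make the goal concrete, here is a rough (untested) sketch of what I mean, reusing the endpoint approach from your comment above:

import numpy as np
from pycaputo import CaputoDerivative, CaputoL1Method, Side, diff
from pycaputo.grid import make_uniform_points

def f(x):
    return (x - 5) ** 2

d = CaputoDerivative(order=0.9, side=Side.Left)
method = CaputoL1Method(d)

a, b = 0.0, 10.0  # same initial guess x = 10 as in the integer-order code
learning_rate = 0.1

for _ in range(25):
    # compute the Caputo derivative on a uniform grid on [a, b]
    x = make_uniform_points(256, a=a, b=b)
    df = diff(method, f, x)
    # use the endpoint value D_C^alpha[f](b) in place of the integer gradient
    b = b - learning_rate * df[-1]
    print(b, df[-1])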

I look forward to your message regarding the above. Thank you!

alexfikl commented 9 months ago

Ah, I see what you want to do! That requires some work to modify the current interface, but I agree that it's necessary.

I'll probably get to work on it over the winter holiday.

alexfikl commented 8 months ago

@Velmurugan1008 In #35, I added a wrapper (and an example) that computes the gradient only at a given set of points. It's probably not as fast as it could be, so it will need more work if you want to scale.

You can use it as inspiration to make your own wrapper, if nothing else. Feel free to open more issues or pull requests though!