OK, I did not code it myself, it's from ChatGPT :-).
' Declare variables
DIM hidden_units AS LONG, input_units AS LONG, output_units AS LONG
input_units = 2 : hidden_units = 4 : output_units = 1 ' example sizes; they must be set before the arrays below are dimensioned
DIM input_weights(1 TO input_units, 1 TO hidden_units) AS DOUBLE
DIM hidden_weights(1 TO hidden_units, 1 TO output_units) AS DOUBLE
DIM input_layer(1 TO input_units) AS DOUBLE
DIM hidden_layer(1 TO hidden_units) AS DOUBLE
DIM output_layer(1 TO output_units) AS DOUBLE
DIM expected_output(1 TO output_units) AS DOUBLE
DIM learning_rate AS DOUBLE
learning_rate = 0.1 ' example step size; must be set before training
' Initialize weights with random values between -0.5 and 0.5
FOR i = 1 TO input_units
FOR j = 1 TO hidden_units
input_weights(i, j) = RND - 0.5 ' RND returns 0..1; shifting gives -0.5..0.5
NEXT
NEXT
FOR i = 1 TO hidden_units
FOR j = 1 TO output_units
hidden_weights(i, j) = RND - 0.5 ' RND returns 0..1; shifting gives -0.5..0.5
NEXT
NEXT
' Main loop
DO
' Feedforward (input_layer and expected_output still need to be filled in for each training pattern)
FOR i = 1 TO hidden_units
hidden_layer(i) = 0
FOR j = 1 TO input_units
hidden_layer(i) = hidden_layer(i) + input_layer(j) * input_weights(j, i)
NEXT
hidden_layer(i) = sigmoid(hidden_layer(i))
NEXT
FOR i = 1 TO output_units
output_layer(i) = 0
FOR j = 1 TO hidden_units
output_layer(i) = output_layer(i) + hidden_layer(j) * hidden_weights(j, i)
NEXT
output_layer(i) = sigmoid(output_layer(i))
NEXT
' Backpropagation
DIM out_delta AS DOUBLE, hid_delta AS DOUBLE ' ERROR is a reserved word in BASIC, so use different names
FOR i = 1 TO output_units
out_delta = (expected_output(i) - output_layer(i)) * sigmoid_derivative(output_layer(i))
FOR j = 1 TO hidden_units
hidden_weights(j, i) = hidden_weights(j, i) + learning_rate * out_delta * hidden_layer(j)
NEXT
NEXT
FOR i = 1 TO hidden_units
hid_delta = 0
FOR j = 1 TO output_units
hid_delta = hid_delta + (expected_output(j) - output_layer(j)) * sigmoid_derivative(output_layer(j)) * hidden_weights(i, j)
NEXT
hid_delta = hid_delta * sigmoid_derivative(hidden_layer(i))
FOR j = 1 TO input_units
input_weights(j, i) = input_weights(j, i) + learning_rate * hid_delta * input_layer(j)
NEXT
NEXT
LOOP UNTIL stop_condition ' e.g. a fixed epoch count or an error threshold
' Activation function
FUNCTION sigmoid(
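The reply broke off right there. To make the first attempt usable anyway, the two missing helpers could look like this (my untested sketch, assuming PowerBASIC-style FUNCTION syntax; note that sigmoid_derivative takes the already-activated value, matching the calls above):
FUNCTION sigmoid(BYVAL x AS DOUBLE) AS DOUBLE
FUNCTION = 1 / (1 + EXP(-x))
END FUNCTION
FUNCTION sigmoid_derivative(BYVAL s AS DOUBLE) AS DOUBLE
FUNCTION = s * (1 - s) ' derivative of sigmoid(x) expressed via s = sigmoid(x)
END FUNCTION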
Since it failed at the end, I gave it a second try and got this:
' Declare constants and variables
#define INPUT_NEURONS 2
#define HIDDEN_NEURONS 3
#define OUTPUT_NEURONS 1
DIM input_neurons(INPUT_NEURONS) AS SINGLE
DIM hidden_neurons(HIDDEN_NEURONS) AS SINGLE
DIM output_neurons(OUTPUT_NEURONS) AS SINGLE
DIM weights_input_hidden(INPUT_NEURONS, HIDDEN_NEURONS) AS SINGLE
DIM weights_hidden_output(HIDDEN_NEURONS, OUTPUT_NEURONS) AS SINGLE
DIM bias_hidden(HIDDEN_NEURONS) AS SINGLE
DIM bias_output(OUTPUT_NEURONS) AS SINGLE
DIM learning_rate AS SINGLE
' Initialize weights and biases to random values
FOR i = 0 TO INPUT_NEURONS - 1
FOR j = 0 TO HIDDEN_NEURONS - 1
weights_input_hidden(i, j) = RND()
NEXT
NEXT
FOR i = 0 TO HIDDEN_NEURONS - 1
FOR j = 0 TO OUTPUT_NEURONS - 1
weights_hidden_output(i, j) = RND()
NEXT
NEXT
FOR i = 0 TO HIDDEN_NEURONS - 1
bias_hidden(i) = RND()
NEXT
FOR i = 0 TO OUTPUT_NEURONS - 1
bias_output(i) = RND()
NEXT
' Set learning rate
learning_rate = 0.1
' Train the network
FOR epoch = 1 TO 1000
' Feedforward
FOR i = 0 TO INPUT_NEURONS - 1
input_neurons(i) = ... ' Set input values here
NEXT
FOR i = 0 TO HIDDEN_NEURONS - 1
hidden_neurons(i) = 0
FOR j = 0 TO INPUT_NEURONS - 1
hidden_neurons(i) = hidden_neurons(i) + input_neurons(j) * weights_input_hidden(j, i)
NEXT
hidden_neurons(i) = hidden_neurons(i) + bias_hidden(i)
hidden_neurons(i) = 1 / (1 + EXP(-hidden_neurons(i))) ' Sigmoid activation function
NEXT
FOR i = 0 TO OUTPUT_NEURONS - 1
output_neurons(i) = 0
FOR j = 0 TO HIDDEN_NEURONS - 1
output_neurons(i) = output_neurons(i) + hidden_neurons(j) * weights_hidden_output(j, i)
NEXT
output_neurons(i) = output_neurons(i) + bias_output(i)
output_neurons(i) = 1 / (1 + EXP(-output_neurons(i))) ' Sigmoid activation function
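The second reply was also cut off, mid feedforward. A minimal untested completion (mine, not ChatGPT's) would close the loops and update the output-layer weights; target() is a hypothetical array for the expected outputs, and the hidden-layer update is omitted because the third attempt below shows the full backpropagation:
NEXT
DIM target(OUTPUT_NEURONS) AS SINGLE ' hypothetical expected outputs for the current pattern
DIM err_out AS SINGLE
FOR i = 0 TO OUTPUT_NEURONS - 1
err_out = (target(i) - output_neurons(i)) * output_neurons(i) * (1 - output_neurons(i)) ' output delta
FOR j = 0 TO HIDDEN_NEURONS - 1
weights_hidden_output(j, i) = weights_hidden_output(j, i) + learning_rate * err_out * hidden_neurons(j)
NEXT
bias_output(i) = bias_output(i) + learning_rate * err_out
NEXT
NEXT epoch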
OK, that was not far from what I wanted.
Quote: I can give you an example of a small neural network with backpropagation in Powerbasic. Note, however, that this example is for illustrative purposes only and may not be suitable for use in a production-ready application.
Here is the code:
' Set up the network with 2 input neurons, 2 hidden neurons, and 1 output neuron
DIM numInputs AS LONG = 2
DIM numHidden AS LONG = 2
DIM numOutputs AS LONG = 1
' Set the learning rate (alpha)
DIM alpha AS DOUBLE = 0.5
' Initialize the weight matrices with random values between -0.5 and 0.5
DIM weightsInputToHidden(numInputs, numHidden) AS DOUBLE
DIM weightsHiddenToOutput(numHidden, numOutputs) AS DOUBLE
FOR i = 1 TO numInputs
FOR j = 1 TO numHidden
weightsInputToHidden(i, j) = RND - 0.5 ' RND returns 0..1; shifting gives -0.5..0.5
NEXT j
NEXT i
FOR i = 1 TO numHidden
FOR j = 1 TO numOutputs
weightsHiddenToOutput(i, j) = RND - 0.5 ' RND returns 0..1; shifting gives -0.5..0.5
NEXT j
NEXT i
' Set up the input data as a matrix with one column per training pattern
DIM inputData(2, 3) AS DOUBLE
inputData(1, 1) = 1
inputData(2, 1) = 0
inputData(1, 2) = 0
inputData(2, 2) = 1
inputData(1, 3) = 1
inputData(2, 3) = 1
' Set up the expected output data as a matrix with one column per training pattern
DIM expectedOutputData(1, 3) AS DOUBLE
expectedOutputData(1, 1) = 0
expectedOutputData(1, 2) = 1
expectedOutputData(1, 3) = 1
' Train the network using backpropagation
FOR epoch = 1 TO 1000
FOR pattern = 1 TO 3
' Set up the input and expected output for this pattern
' (INPUT is a reserved word in BASIC, so the array is named inp)
DIM inp(1 TO numInputs) AS DOUBLE
DIM expectedOutput(1 TO numOutputs) AS DOUBLE
FOR i = 1 TO numInputs
inp(i) = inputData(i, pattern)
NEXT i
FOR i = 1 TO numOutputs
expectedOutput(i) = expectedOutputData(i, pattern)
NEXT i
' Feed the input forward through the network to compute the output
DIM hidden(1 TO numHidden) AS DOUBLE
FOR i = 1 TO numHidden
hidden(i) = 0
FOR j = 1 TO numInputs
hidden(i) = hidden(i) + weightsInputToHidden(j, i) * inp(j)
NEXT j
hidden(i) = 1 / (1 + EXP(-hidden(i))) ' Apply the sigmoid activation function
NEXT i
DIM outp(1 TO numOutputs) AS DOUBLE ' OUTPUT is likewise reserved, hence outp
FOR i = 1 TO numOutputs
outp(i) = 0
FOR j = 1 TO numHidden
outp(i) = outp(i) + weightsHiddenToOutput(j, i) * hidden(j)
NEXT j
outp(i) = 1 / (1 + EXP(-outp(i))) ' Apply the sigmoid activation function
NEXT i
' Compute the output error and delta for this pattern
' (ERROR is a reserved word in BASIC, so the array is named errOut)
DIM errOut(1 TO numOutputs) AS DOUBLE
DIM deltaOutput(1 TO numOutputs) AS DOUBLE
FOR i = 1 TO numOutputs
errOut(i) = expectedOutput(i) - outp(i)
deltaOutput(i) = errOut(i) * outp(i) * (1 - outp(i)) ' error times sigmoid derivative
NEXT i
' Backpropagate the error to get the hidden-layer deltas
DIM deltaHidden(1 TO numHidden) AS DOUBLE
FOR i = 1 TO numHidden
deltaHidden(i) = 0
FOR j = 1 TO numOutputs
deltaHidden(i) = deltaHidden(i) + deltaOutput(j) * weightsHiddenToOutput(i, j)
NEXT j
deltaHidden(i) = deltaHidden(i) * hidden(i) * (1 - hidden(i)) ' apply the hidden-layer sigmoid derivative
NEXT i
' Update the weight matrices
FOR i = 1 TO numInputs
FOR j = 1 TO numHidden
weightsInputToHidden(i, j) = weightsInputToHidden(i, j) + alpha * inp(i) * deltaHidden(j)
NEXT j
NEXT i
FOR i = 1 TO numHidden
FOR j = 1 TO numOutputs
weightsHiddenToOutput(i, j) = weightsHiddenToOutput(i, j) + alpha * hidden(i) * deltaOutput(j)
NEXT j
NEXT i
NEXT pattern
NEXT epoch
' Test the network
DIM testInput(1 TO 2) AS DOUBLE
testInput(1) = 1
testInput(2) = 0
DIM testHidden(1 TO 2) AS DOUBLE
FOR i = 1 TO numHidden
testHidden(i) = 0
FOR j = 1 TO numInputs
testHidden(i) = testHidden(i) + weightsInputToHidden(j, i) * testInput(j)
NEXT j
testHidden(i) = 1 / (1 + EXP(-testHidden(i))) ' Apply the sigmoid activation function
NEXT i
DIM testOutput(1 TO 1) AS DOUBLE
FOR i = 1 TO numOutputs
testOutput(i) = 0
FOR j = 1 TO numHidden
testOutput(i) = testOutput(i) + weightsHiddenToOutput(j, i) * testHidden(j)
NEXT j
testOutput(i) = 1 / (1 + EXP(-testOutput(i))) ' Apply the sigmoid activation function
NEXT i
' Print the output
PRINT "Output for test input (1, 0): "; testOutput(1)
Quote: There are a few ways the example shown above could be improved for recognizing letters. Here are some ideas:
The neural network could be made bigger to give it more capacity.
A better activation function could be used. The sigmoid function used in the example above may not be the best choice for every application; other activation functions such as the ReLU (Rectified Linear Unit) can give better results in some cases (sketched below).
A different optimization algorithm could be used; for example, the weights could be updated with Adam instead of the plain backpropagation step (also sketched below).
A larger and better training data set could be used to train the network. This would help the network become more general and generalize better to new, unseen data.
A different architecture could be used for the neural network, e.g. a convolutional neural network (CNN) or a long short-term memory (LSTM) network. These architectures are particularly well suited to processing images or sequences of data and could be an advantage in this case.
I hope these ideas help you improve the example and adapt it for recognizing letters.
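To make the ReLU suggestion concrete, the two functions could look like this (my untested sketch, same PowerBASIC-style syntax as above; like sigmoid_derivative, relu_derivative takes the already-activated value, which works because relu(x) > 0 exactly when x > 0):
FUNCTION relu(BYVAL x AS DOUBLE) AS DOUBLE
IF x > 0 THEN FUNCTION = x ELSE FUNCTION = 0
END FUNCTION
FUNCTION relu_derivative(BYVAL s AS DOUBLE) AS DOUBLE
IF s > 0 THEN FUNCTION = 1 ELSE FUNCTION = 0
END FUNCTION
ReLU would normally replace the sigmoid only in the hidden layer; the output layer keeps the sigmoid so the result stays between 0 and 1.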
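And a rough sketch of what an Adam-style update would look like for the hidden-to-output weights of the third example (my addition, untested; Adam still uses the backpropagated gradients, it only changes how each weight is stepped; mW, vW, beta1, beta2, eps and the step counter t are names I made up, not part of ChatGPT's reply):
' Declared once, before the training loop
DIM mW(1 TO numHidden, 1 TO numOutputs) AS DOUBLE ' running mean of the gradient (first moment)
DIM vW(1 TO numHidden, 1 TO numOutputs) AS DOUBLE ' running mean of the squared gradient (second moment)
DIM beta1 AS DOUBLE, beta2 AS DOUBLE, eps AS DOUBLE
DIM g AS DOUBLE, mHat AS DOUBLE, vHat AS DOUBLE
DIM t AS LONG
beta1 = 0.9 : beta2 = 0.999 : eps = 1E-8
' Inside the pattern loop, after deltaOutput has been computed,
' this replaces the plain weightsHiddenToOutput update:
t = t + 1 ' global step counter
FOR i = 1 TO numHidden
FOR j = 1 TO numOutputs
g = -hidden(i) * deltaOutput(j) ' gradient of the squared error for this weight
mW(i, j) = beta1 * mW(i, j) + (1 - beta1) * g
vW(i, j) = beta2 * vW(i, j) + (1 - beta2) * g * g
mHat = mW(i, j) / (1 - beta1 ^ t) ' bias-corrected estimates
vHat = vW(i, j) / (1 - beta2 ^ t)
weightsHiddenToOutput(i, j) = weightsHiddenToOutput(i, j) - alpha * mHat / (SQR(vHat) + eps)
NEXT j
NEXT i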