Teaching GPT-4 to write code from research papers

Ethan Steininger
4 min read · Mar 17, 2023


A new research paper on building recommendation systems, authored by the ByteDance team, just came out: https://arxiv.org/pdf/2209.07663.pdf

I’m no scientist, but I love keeping up with the latest and greatest in research, and digesting a paper like this is a perfect task for GPT-4.

Prepping the content

Since GPT-4 has a per-request token limit, we first split the paper into single-page chunks:

import os
import PyPDF2

# Split a PDF file into 1-page chunks and save them as separate files
def split_pdf(input_pdf_path, output_dir):
    # Open the input PDF and create a PDF reader object
    with open(input_pdf_path, 'rb') as input_pdf:
        pdf_reader = PyPDF2.PdfReader(input_pdf)

        # Get the total number of pages in the PDF
        num_pages = len(pdf_reader.pages)

        # Write each page out as its own 1-page PDF
        for i in range(num_pages):
            pdf_writer = PyPDF2.PdfWriter()
            pdf_writer.add_page(pdf_reader.pages[i])

            # Save the new PDF as a separate file
            output_path = os.path.join(output_dir, f'{i + 1}.pdf')
            with open(output_path, 'wb') as output_pdf:
                pdf_writer.write(output_pdf)
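Before sending anything to the model, it’s worth checking that each page actually fits under that limit. Here is a minimal sketch, assuming the paper was saved locally as bytedance_paper.pdf (a hypothetical filename) and using the tiktoken library, which the original script doesn’t use:

import PyPDF2
import tiktoken

# Split the downloaded paper into 1-page PDFs inside the 'pages' folder
split_pdf("bytedance_paper.pdf", "pages")

# Count each page's tokens with GPT-4's tokenizer
enc = tiktoken.encoding_for_model("gpt-4")
reader = PyPDF2.PdfReader("bytedance_paper.pdf")
for i, page in enumerate(reader.pages):
    text = page.extract_text() or ""
    print(f"page {i + 1}: {len(enc.encode(text))} tokens")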

Now we create our chatgpt helper, which accepts a prompt string, appends it to the running conversation (so GPT-4 remembers earlier pages), and returns the model’s reply:

import json
import requests

url = "https://api.openai.com/v1/chat/completions"

# Keep the running conversation so GPT-4 remembers earlier messages
history = []

# Define a function to send a message to the OpenAI API and return the reply
def chatgpt(msg):
    # Add the user message to the conversation history
    history.append({"role": "user", "content": msg})

    # Construct payload for API request
    payload = json.dumps({
        "model": "gpt-4",
        "messages": history
    })

    # Define headers for API request
    headers = {
        'Authorization': 'Bearer API_KEY',  # Add your OpenAI API key here
        'Content-Type': 'application/json'
    }

    # Send POST request to OpenAI API with payload and headers
    response = requests.post(url, headers=headers, data=payload)

    # Extract the assistant's reply, remember it, and return it
    reply = response.json()["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply
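As a quick smoke test, once the API_KEY placeholder is swapped for a real key, you can call the helper directly:

# One-off call to confirm the key and helper are working
print(chatgpt("Reply with one word: ready"))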

Now we put it all together: we extract each page’s contents using Tika and send them to chatgpt as prompts, prefixed with a quick qualifier:

import os
from tika import parser

# Prime the model with instructions before sending the pages
chatgpt("you are a machine learning engineer in academia, you will be accepting the following pages of research and build a class with methods that implements said research, understood?")

# Iterate through every file in the 'pages' folder, in page order
for filename in sorted(os.listdir("pages"), key=lambda f: int(f.split('.')[0])):
    file_path = os.path.join("pages", filename)
    # Parse the PDF file and extract its text content
    parsed_pdf = parser.from_file(file_path)['content']
    # Send the content to the OpenAI API
    if isinstance(parsed_pdf, str):
        chatgpt(parsed_pdf)

# Ask for the implementation
print(chatgpt("now write me a class with various methods that implements this research paper"))

Generating the code

GPT-4 responded with the following skeleton:

import numpy as np
import tensorflow as tf
from scipy.optimize import minimize

class AdaptiveOptimalControl:
    def __init__(self, system_dynamics, neural_network_policy, gps_settings):
        self.system_dynamics = system_dynamics
        self.policy = neural_network_policy
        self.gps_settings = gps_settings

    def train(self, num_iterations, num_trajectories, time_steps=100):
        for iteration in range(num_iterations):
            # Generate trajectories using the current policy
            trajectories = self.generate_trajectories(num_trajectories, time_steps)

            # Update the policy using the guided policy search algorithm
            self.guided_policy_search(trajectories)

    def generate_trajectories(self, num_trajectories, time_steps):
        trajectories = []
        for _ in range(num_trajectories):
            state = self.system_dynamics.initial_state()
            trajectory = []
            for t in range(time_steps):
                action = self.policy(state)
                next_state = self.system_dynamics(state, action)
                trajectory.append((state, action, next_state))
                state = next_state
            trajectories.append(trajectory)
        return trajectories

    def guided_policy_search(self, trajectories):
        local_models = self.trajectory_centric_reinforcement_learning(trajectories)
        expert_trajectories = self.adaptive_optimal_control_step(local_models, trajectories)
        self.supervised_learning_step(expert_trajectories, trajectories)

    def trajectory_centric_reinforcement_learning(self, trajectories):
        # Implement the trajectory-centric reinforcement learning algorithm
        # You can use linear regression, Gaussian process regression, or other techniques
        pass

    def supervised_learning_step(self, expert_trajectories, policy_trajectories):
        states, actions = zip(*[(state, action) for trajectory in expert_trajectories for state, action, _ in trajectory])
        states = np.stack(states)
        actions = np.stack(actions)

        # Train the neural network policy using the expert trajectories
        self.policy.train_on_batch(states, actions)

    def adaptive_optimal_control_step(self, local_models, trajectories):
        expert_trajectories = []
        for trajectory in trajectories:
            expert_trajectory = []
            for state, _, _ in trajectory:
                # Optimize the control signal using local linear models
                action = self.optimize_control_signal(state, local_models)
                next_state = self.system_dynamics(state, action)
                expert_trajectory.append((state, action, next_state))
            expert_trajectories.append(expert_trajectory)
        return expert_trajectories

    def optimize_control_signal(self, state, local_models):
        objective = lambda action: self.evaluate_local_models(state, action, local_models)
        initial_action = self.policy(state)
        result = minimize(objective, initial_action, method='L-BFGS-B')  # You can use other optimization algorithms
        return result.x

    def evaluate_local_models(self, state, action, local_models):
        # Evaluate the cost function and dynamics models given the state, action, and local models
        # This depends on the specific models you used in the trajectory-centric reinforcement learning step
        pass
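Since the learning steps are left as stubs, only the trajectory generation can actually run. Here is a hypothetical smoke test with a stub dynamics object and a toy linear policy (all names of my own invention, standing in for a real simulator and neural network):

import numpy as np

# Toy stand-ins for the real system dynamics and neural network policy
class StubDynamics:
    def initial_state(self):
        return np.zeros(2)

    def __call__(self, state, action):
        return state + 0.1 * action  # trivial linear step

toy_policy = lambda state: -0.5 * state  # placeholder for a trained network

aoc = AdaptiveOptimalControl(StubDynamics(), toy_policy, gps_settings={})
trajectories = aoc.generate_trajectories(num_trajectories=3, time_steps=5)
print(len(trajectories), len(trajectories[0]))  # 3 trajectories, 5 steps each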

If you want all the code together: GitHub

Now, this code obviously isn’t runnable right out of the gate, but it gives you a starting point for implementing something interesting. We can also ask GPT-4 to summarize the paper:
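For example, with a prompt along these lines (the exact wording is my reconstruction, not the author’s):

# Ask for a plain-language summary aimed at an engineer
print(chatgpt("now summarize this research paper for a software engineer"))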

Title: “Spectrum-Preserving Generative Adversarial Networks for Anomaly Detection in High Dimensional Data”

Summary:

This paper introduces Spectrum-Preserving Generative Adversarial Networks (SP-GANs), a novel approach to anomaly detection in high-dimensional data. The key idea is to preserve the spectral structure of the training data while generating synthetic samples, which allows for improved anomaly detection.

SP-GANs consist of a generator and a discriminator, like standard GANs, but with a few important differences. The generator is designed to generate samples in the spectral domain rather than the spatial domain, and the discriminator is trained to distinguish between real and synthetic samples based on their spectral properties. This enables the preservation of the spectral structure of the training data, which is important for accurately capturing the characteristics of normal data and detecting anomalies.

The paper presents experimental results on several high-dimensional datasets, including image and sensor data, demonstrating the effectiveness of SP-GANs in anomaly detection tasks. The results show that SP-GANs outperform other state-of-the-art anomaly detection methods, particularly in high-dimensional settings.

As a software engineer, you can appreciate the novelty of the SP-GAN architecture and its potential applications in various domains that require effective anomaly detection, such as cybersecurity, fraud detection, and quality control.

Remarkable.
