
Developing Fractal Ranges- Part 2

Applying the ideas from part 1

Theoretical discussions about analyzing time series data are fascinating, but it's equally crucial to put these ideas into practice and observe the outcomes. In this post, I'll showcase some of the concepts I discussed in part 1 using basic Python tools as a starting point.

Garbage in, garbage out

Before diving into analysis, I need data. Professional datasets can be pricey, and setting up API keys might be overkill for this preliminary exploration. Hence, I've opted for yFinance, a popular library for fetching straightforward financial data. This allows me to access closing prices for most common equities.

However, data often requires some tidying up. Anticipating multiple data cleaning sessions in the future, I've decided to craft a class that offers various methods to process time series data based on the required "cleanliness" level.

import pandas as pd

class DataCleaner:
    def __init__(self, data):
        self.data = data

    def remove_missing(self):
        """Remove rows with missing values."""
        self.data.dropna(inplace=True)

    def get_cleaned_data(self):
        """Return the cleaned data."""
        return self.data

# Usage:
# Assuming you've fetched data from yfinance into a DataFrame called 'df':
# cleaner = DataCleaner(df)
# cleaner.remove_missing()
# cleaned_data = cleaner.get_cleaned_data()

This initial version is basic, but it's extensible. For now, it ensures the absence of empty values in the data, which could otherwise lead to errors. From this cleaned data, I can extract closing price and volume information for further calculations.
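As a quick sanity check, here is the cleaner applied to a small hand-built frame (the values below are synthetic, not real market data) to show the missing-row removal and the extraction of the two series:

```python
import numpy as np
import pandas as pd

class DataCleaner:
    def __init__(self, data):
        self.data = data

    def remove_missing(self):
        """Remove rows with missing values."""
        self.data.dropna(inplace=True)

    def get_cleaned_data(self):
        """Return the cleaned data."""
        return self.data

# A tiny frame mimicking a yfinance download, with two incomplete rows.
df = pd.DataFrame({
    "Close": [100.0, 101.5, np.nan, 103.2],
    "Volume": [1_000.0, 1_200.0, 1_100.0, np.nan],
})

cleaner = DataCleaner(df)
cleaner.remove_missing()
cleaned = cleaner.get_cleaned_data()

# The two series the later calculations need.
close, volume = cleaned["Close"], cleaned["Volume"]
```

Only the two fully populated rows survive, so any downstream calculation sees no NaNs.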

Calculating the Hurst exponent... the easy way

While it's possible to calculate the Hurst exponent manually, I've opted for a more efficient route. Thanks to Dmitry Mottl and his handy hurst library, I can offload most of the heavy lifting.

pip install hurst

Experimenting with the data

After extensive experimentation with integrating the Hurst calculation into my model, I've arrived at a preliminary design. It feels intuitively right, especially when I think about the Sierpinski triangle. While I foresee some tweaks and adjustments down the line, my immediate goal is to have a functional model that can produce tangible predictions.

My current approach has a fractal-esque design. I might be biting off more than I can chew by also weaving in volatility and volume. However, understanding the Hurst exponents of price, volatility, and volume could be enlightening.

Nuances of the Hurst exponent calculation

There are multiple ways to calculate the Hurst exponent. One method is the Rescaled Range (R/S) method, which incorporates a window in its calculations. This method allowed me to experiment with ranges reminiscent of the "Trade, Trend, Tail" ranges from Hedgeye. I've approximated three sets of ranges for each asset:

  • A short range of 5 to 15 days,
  • A medium range of 15 to 60 days,
  • A longer range spanning 60 to 756 days.
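Under the hood, the windowed R/S procedure that those ranges feed into can be sketched in plain NumPy. This is a simplified illustration of the technique, not the hurst library's exact implementation:

```python
import numpy as np

def rs_hurst(series, window_sizes):
    """Estimate the Hurst exponent via the rescaled-range (R/S) method.

    For each window size, the series is split into non-overlapping
    segments. Each segment's range of mean-adjusted cumulative sums
    is divided by its standard deviation, and H is the slope of
    log(mean R/S) against log(window size).
    """
    series = np.asarray(series, dtype=float)
    log_ws, log_rs = [], []
    for w in window_sizes:
        rs_values = []
        for start in range(0, len(series) - w + 1, w):
            segment = series[start:start + w]
            z = np.cumsum(segment - segment.mean())  # mean-adjusted profile
            r = z.max() - z.min()                    # range of the profile
            s = segment.std()                        # scale of the segment
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_ws.append(np.log(w))
            log_rs.append(np.log(np.mean(rs_values)))
    slope, _ = np.polyfit(log_ws, log_rs, 1)
    return slope

# White noise should land roughly near 0.5 (small-sample bias
# tends to push the R/S estimate slightly higher).
rng = np.random.default_rng(42)
noise = rng.standard_normal(2000)
h = rs_hurst(noise, window_sizes=[8, 16, 32, 64, 128])
```

Changing the list of window sizes is exactly what the three "Trade, Trend, Tail"-style ranges do: the same estimator is pointed at short, medium, and long horizons.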

Garbage out... or at least set aside for now

Ideas that ended up in the recycling bin included:

  • employing a Fourier transform to detect cyclicality within the time series, then using the dominant cycle as a range for the Hurst calculation,
  • using volatility buckets to adjust the window sizes to account for differing environments,
  • using manually coded versions of the Hurst calculation that could go into more depth,
  • creating a volume-adjusted time series that applied prices to a volume "clock".

Working class Hurst computer

Below is the class I've developed to perform the Hurst calculations.

import numpy as np
from hurst import compute_Hc

class HurstAnalyzer:
    def __init__(self, data):
        self.data = data
        self.standard_ranges = [(5, 15), (15, 60), (60, 756)]

    def compute_hurst_for_series(self, series, kind):
        """Compute the Hurst exponent for a given series across standard ranges."""
        hurst_values = []

        series = series.dropna()

        for window_range in self.standard_ranges:
            min_window, max_window = window_range
            H, _, _ = compute_Hc(series, kind=kind, min_window=min_window, max_window=max_window, simplified=False)
            hurst_values.append(H)

        return hurst_values

    def hurst_for_price(self):
        """Compute the Hurst exponent for the price series."""
        return self.compute_hurst_for_series(self.data['Close'], kind='price')

    def hurst_for_volume(self):
        """Compute the Hurst exponent for the volume series."""
        return self.compute_hurst_for_series(self.data['Volume'], kind='random_walk')

    def hurst_for_volatility(self):
        """Compute the Hurst exponent for the volatility series."""
        returns = self.data['Close'].pct_change().dropna()
        volatility = returns.rolling(window=10).std()  # Using a 10-day rolling window for volatility
        return self.compute_hurst_for_series(volatility, kind='change')

# Usage example:
# analyzer = HurstAnalyzer(data)
# print(analyzer.hurst_for_price())
# print(analyzer.hurst_for_volume())
# print(analyzer.hurst_for_volatility())

I'm pleased with the output from a few sample runs. Here's an example output for the symbol URA:

{
  "price_hurst_exponent": [
    0.6257539225064083,
    0.5829486456131223,
    0.5548945903163345
  ],
  "volatility_hurst_exponent": [
    0.9173617419599562,
    0.8948432209174462,
    0.8904415607052489
  ],
  "volume_hurst_exponent": [
    0.3682909790672936,
    0.3261055020352888,
    0.3245664632417977
  ]
}

Thoughts on the output results

The data appears to be logically consistent. The Hurst values for price tend to gravitate around 0.5, aligning with the general perception of it being "random". Deviations from this norm in shorter time frames might signal potential price opportunities, either indicating an impending reversion or a sustained trend.

The Hurst measure for volatility is also intuitive. A value approaching one indicates strong persistence: volatile periods tend to be followed by more volatile periods, which matches the well-documented clustering of volatility.
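One way to make these readings systematic is a small labelling helper. The 0.05 band around 0.5 below is an arbitrary threshold of my own choosing, not anything prescribed by the theory:

```python
def interpret_hurst(h, band=0.05):
    """Label a Hurst exponent: H < 0.5 suggests mean reversion,
    H near 0.5 a random walk, and H > 0.5 a persistent trend."""
    if h < 0.5 - band:
        return "mean-reverting"
    if h > 0.5 + band:
        return "trending"
    return "random-walk-like"

# Roughly the URA volume, price, and volatility readings above.
labels = [interpret_hurst(h) for h in (0.37, 0.55, 0.92)]
```

A helper like this would let the model consume a label per time frame rather than a raw exponent.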

However, the Hurst exponent for volume is less straightforward. The results varied considerably across assets, which might hint at an inappropriate "kind" being used in the calculation. This warrants further investigation; possible culprits include:

  • data quality
  • data preprocessing
  • different "kind"
  • segmentation of the dataset into different periods (fourier transform?)

Iteration on the calculation will undoubtedly be required.

Final thoughts and next steps

Acquiring the Hurst exponent values for price, volume, and volatility lays a solid foundation for my model, and layering in the three different time frames adds some dimensionality as well. While the current calculations may require further refinement, in my upcoming post I aim to delve into their potential applications: concepts centered on variations in values across different time frames, and how to leverage these values to develop a predictive model.

Regards

MP