TimeSeriesGrammianAngularField
Library "TimeSeriesGrammianAngularField"
Provides Gramian angular field and associated utility functions.
___
Reference:
*Time Series Classification: A review of Algorithms and Implementations*.
www.researchgate.net
method normalize(data, a, b)
Normalize the series to an optional range, usually within `(-1, 1)` or `(0, 1)`.
Namespace types: array
Parameters:
data (array) : Sample data to normalize.
a (float) : Minimum target range value, `default=-1.0`.
b (float) : Maximum target range value, `default= 1.0`.
Returns: Normalized array within the new range.
___
Reference:
*Time Series Classification: A review of Algorithms and Implementations*.
normalize_series(source, length, a, b)
Normalize the series to an optional range, usually within `(-1, 1)` or `(0, 1)`.
*Note that this may provide a different result than the array version due to the rolling range.*
Parameters:
source (float) : Series to normalize.
length (int) : Number of bars to sample the range.
a (float) : Minimum target range value, `default=-1.0`.
b (float) : Maximum target range value, `default= 1.0`.
Returns: Normalized series within the new range.
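Example (a hypothetical stand-alone sketch of the min-max rescale both functions describe, not the library's exact implementation):
//@version=5
indicator("normalize sketch")
// Rescale `src` from its `length`-bar rolling range into the target range [a, b].
f_normalize(float src, int length, float a, float b) =>
    lo = ta.lowest(src, length)
    hi = ta.highest(src, length)
    hi == lo ? a : a + (b - a) * (src - lo) / (hi - lo)
plot(f_normalize(close, 50, -1.0, 1.0))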
method polar(data)
Turns a normalized sample array into polar coordinates.
Namespace types: array
Parameters:
data (array) : Sampled data values.
Returns: Converted array into polar coordinates.
polar_series(source)
Turns a normalized series into polar coordinates.
Parameters:
source (float) : Source series.
Returns: Converted series into polar coordinates.
method gasf(data)
Gramian Angular Summation Field *`GASF`*.
Namespace types: array
Parameters:
data (array) : Sampled data values.
Returns: Matrix with *`GASF`* values.
method gasf_id(data)
Trig. identity of Gramian Angular Summation Field *`GASF`*.
Namespace types: array
Parameters:
data (array) : Sampled data values.
Returns: Matrix with *`GASF`* values.
Reference:
*Time Series Classification: A review of Algorithms and Implementations*.
method gadf(data)
Gramian Angular Difference Field *`GADF`*.
Namespace types: array
Parameters:
data (array) : Sampled data values.
Returns: Matrix with *`GADF`* values.
method gadf_id(data)
Trig. identity of Gramian Angular Difference Field *`GADF`*.
Namespace types: array
Parameters:
data (array) : Sampled data values.
Returns: Matrix with *`GADF`* values.
Reference:
*Time Series Classification: A review of Algorithms and Implementations*.
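Example (a hypothetical sketch of the underlying math, assuming the usual GASF definition `cos(phi_i + phi_j)` over arc-cosine-encoded values; the library's own implementation may differ):
//@version=5
indicator("GASF sketch")
// Encode a normalized window into angles, then build the Gramian Angular
// Summation Field cos(phi_i + phi_j) as a square matrix.
f_gasf(array<float> normalized) =>
    int n = normalized.size()
    phi = array.new<float>()
    for i = 0 to n - 1
        phi.push(math.acos(normalized.get(i)))   // polar encoding
    m = matrix.new<float>(n, n)
    for i = 0 to n - 1
        for j = 0 to n - 1
            m.set(i, j, math.cos(phi.get(i) + phi.get(j)))
    m
g = f_gasf(array.from(-1.0, -0.5, 0.0, 0.5, 1.0))
plot(g.get(0, 0))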
LIB_TradeAssist
Library "LIB_TradeAssist"
This library is a collection of assistance tools, saving me the need to copy the same code again and again in my various indicators and strategies.
Slop_Magnitude(val_now, val_older, mult_factor)
Calculate the slope magnitude between the current price and an older price. Since the change is usually minimal, we multiply it by a default value of 3000 to make it usable. You can optionally pass another multiplication factor.
Parameters:
val_now (float)
val_older (float)
mult_factor (float)
Returns: Slope angle magnitude
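Example (a hypothetical sketch of the described calculation):
//@version=5
indicator("slope magnitude sketch")
// The raw change between two prices is tiny, so scale it by a factor
// (default 3000 per the description) to make it usable.
f_slope_magnitude(float val_now, float val_older, float mult_factor) =>
    (val_now - val_older) * mult_factor
plot(f_slope_magnitude(close, close[10], 3000))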
series_collection
Library "series_collection"
A personal collection of commonly used series types like moving averages that are supported directly by
the pinescript library ('ALMA', 'DEMA', 'EMA', 'HMA', 'RMA', 'SMA', 'SWMA', 'VWMA', 'WMA'), highest and lowest source,
median and pivots. One single function (with overloads) that can be configured easily by the user input and can be
used as a core piece of functionality for many use cases. This library was created to abstract away and re-use this
commonly used functionality in my "Two MA Signal Indicator" script and the "Template Trailing Strategy" script. Both
of them use the "two_ma_logic" for defining entry and exit signals. While this piece of work does not contain any
novel mathematical expressions and just adds a convenient (and configurable) way to do things, I hope that it might add
value to other scripts and future projects as well.
cust_series(length, seriesType, source)
cust_series - Calculate the custom series of the given source for the given length and type
Parameters:
length (simple int) : - The length of the custom series
seriesType (simple string) : - The type of the custom series
source (float) : - The source of the values
Returns: - The resulting value of the calculations of the custom series
cust_series(length, seriesType, source)
cust_series - Calculate the custom series of the given source for the given length and type
Parameters:
length (simple float) : - The length of the custom series (ceiled)
seriesType (simple string) : - The type of the custom series
source (float) : - The source of the values
Returns: - The resulting value of the calculations of the custom series
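Example (a hypothetical dispatcher in the spirit of cust_series; the library's own switch covers more types):
//@version=5
indicator("cust_series sketch", overlay = true)
seriesType = input.string("EMA", "Series type", options = ["EMA", "SMA", "HMA", "WMA"])
length = input.int(20, "Length")
f_cust_series(simple int len, simple string sType, float src) =>
    switch sType
        "EMA" => ta.ema(src, len)
        "SMA" => ta.sma(src, len)
        "HMA" => ta.hma(src, len)
        "WMA" => ta.wma(src, len)
plot(f_cust_series(length, seriesType, close))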
TimeSeriesClassificationActivationFunctions
Library "TimeSeriesClassificationActivationFunctions"
Provides some activation functions useful in time series classification.
___
reference:
github.com
method scale(dist, weights)
Activate values by a normalized scale.
Namespace types: map
Parameters:
dist (map) : Source distribution map.
weights (map) : Weights distribution map.
Returns: Normalized distribution map.
method softmax(dist, weights)
Activate values with a softmax algorithm.
Namespace types: map
Parameters:
dist (map) : Source distribution map.
weights (map) : Weights distribution map.
Returns: Normalized distribution map.
method argmax(dist, weights)
Activate values with an argmax algorithm.
Namespace types: map
Parameters:
dist (map) : Source distribution map.
weights (map) : Weights distribution map.
Returns: The first key of the argmax value of the transformed distribution.
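Example (a hypothetical stand-alone softmax over a map's values, ignoring the weights parameter for brevity; not the library's exact code):
//@version=5
indicator("softmax sketch")
// Exponentiate each value and divide by the sum so the results form a
// probability distribution.
f_softmax(map<string, float> dist) =>
    result = map.new<string, float>()
    float total = 0.0
    for [key, value] in dist
        total += math.exp(value)
    for [key, value] in dist
        result.put(key, math.exp(value) / total)
    result
var d = map.new<string, float>()
d.put("a", 1.0)
d.put("b", 2.0)
plot(f_softmax(d).get("b"))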
MatrixScaleDown
Library "MatrixScaleDown"
Provides a function to scale down a matrix into a smaller square format where its values are averaged to maintain the matrix topology.
method scale_down(mat, size)
scale a matrix to a new smaller square size.
Namespace types: matrix
Parameters:
mat (matrix) : Source matrix.
size (int) : New matrix size.
Returns: New matrix with scaled down size. Source values will be averaged together.
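Example (a hypothetical block-averaging sketch for a square source matrix; the library's exact averaging may differ):
//@version=5
indicator("matrix scale-down sketch")
// Each cell of the smaller matrix becomes the mean of the source cells
// that map onto it.
f_scale_down(matrix<float> mat, int size) =>
    result = matrix.new<float>(size, size, 0.0)
    counts = matrix.new<float>(size, size, 0.0)
    float factor = mat.rows() / float(size)
    for r = 0 to mat.rows() - 1
        for c = 0 to mat.columns() - 1
            int tr = math.min(size - 1, math.floor(r / factor))
            int tc = math.min(size - 1, math.floor(c / factor))
            result.set(tr, tc, result.get(tr, tc) + mat.get(r, c))
            counts.set(tr, tc, counts.get(tr, tc) + 1.0)
    for r = 0 to size - 1
        for c = 0 to size - 1
            result.set(r, c, result.get(r, c) / math.max(1.0, counts.get(r, c)))
    result
m = matrix.new<float>(4, 4, 1.0)
plot(f_scale_down(m, 2).get(0, 0))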
lib_fvg
Library "lib_fvg"
A further expansion of my object-oriented library toolkit. This lib detects Fair Value Gaps and returns them as objects.
Drawing them is a separate step so the lib can be used with securities. It also allows for usage of the current/close price to detect fill/invalidation of a gap and to adjust the fill level dynamically. FVGs can be detected while forming and extended indefinitely while they're unfilled.
method draw(this)
Namespace types: FVG
Parameters:
this (FVG)
method draw(fvgs)
Namespace types: FVG
Parameters:
fvgs (FVG[])
is_fvg(mode, precondition, filter_insignificant, filter_insignificant_atr_factor, live)
Parameters:
mode (int) : switch for detection 1 for bullish FVGs, -1 for bearish FVGs
precondition (bool) : allows for other confluences to block/enable detection
filter_insignificant (bool) : allows to ignore small gaps
filter_insignificant_atr_factor (float) : allows to adjust how small (compared to a 50 period ATR)
live (bool) : allows to detect FVGs while the third bar is forming -> will cause repainting
Returns: a tuple of (bar_index of gap bar, gap top, gap bottom)
create_fvg(mode, idx, top, btm, filled_at_pc, config)
Parameters:
mode (int) : switch for detection 1 for bullish FVGs, -1 for bearish FVGs
idx (int) : the bar_index of the FVG gap bar
top (float) : the top level of the FVG
btm (float) : the bottom level of the FVG
filled_at_pc (float) : the ratio (0-1) that the fill source needs to retrace into the gap to consider it filled/invalidated/ready for removal
config (FVGConfig) : the plot configuration/styles for the FVG
Returns: a new FVG object if there was a new FVG, else na
detect_fvg(mode, filled_at_pc, precondition, filter_insignificant, filter_insignificant_atr_factor, live, config)
Parameters:
mode (int) : switch for detection 1 for bullish FVGs, -1 for bearish FVGs
filled_at_pc (float)
precondition (bool) : allows for other confluences to block/enable detection
filter_insignificant (bool) : allows to ignore small gaps
filter_insignificant_atr_factor (float) : allows to adjust how small (compared to a 50 period ATR)
live (bool) : allows to detect FVGs while the third bar is forming -> will cause repainting
config (FVGConfig)
Returns: a new FVG object if there was a new FVG, else na
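Example (hypothetical usage; the import path/version and positional arguments are assumptions, check the publication for the real ones):
//@version=5
indicator("lib_fvg usage sketch", overlay = true)
import robbatt/lib_fvg/1 as fvg   // hypothetical import path/version
// Detect bullish FVGs on confirmed bars and mark the gap top.
[gap_bar, gap_top, gap_btm] = fvg.is_fvg(1, true, true, 0.5, false)
plot(gap_top, "FVG top", style = plot.style_circles)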
method update(this, fill_src)
Namespace types: FVG
Parameters:
this (FVG)
fill_src (float) : allows for usage of different fill source series, e.g. high for bearish FVGs, low for bullish FVGs, or close for both
method update(all, fill_src)
Namespace types: FVG
Parameters:
all (FVG[])
fill_src (float)
method remove_filled(unfilled_fvgs)
Namespace types: FVG
Parameters:
unfilled_fvgs (FVG[])
method delete(this)
Namespace types: FVG
Parameters:
this (FVG)
method delete_filled_fvgs_buffered(filled_fvgs, max_keep)
Namespace types: FVG
Parameters:
filled_fvgs (FVG[])
max_keep (int) : the number of filled, latest FVGs to retain on the chart.
FVGConfig
Fields:
box_args (BoxArgs type from robbatt/lib_plot_objects/36)
line_args (LineArgs type from robbatt/lib_plot_objects/36)
box_show (series bool)
line_show (series bool)
keep_filled (series bool)
extend (series bool)
FVG
Fields:
config (FVGConfig)
startbar (series int)
mode (series int)
top (series float)
btm (series float)
center (series float)
size (series float)
fill_size (series float)
fill_lvl_target (series float)
fill_lvl_current (series float)
fillbar (series int)
filled (series bool)
_fvg_box (Box type from robbatt/lib_plot_objects/36)
_fill_line (Line type from robbatt/lib_plot_objects/36)
AllTimeHighLow
Library "AllTimeHighLow"
Provides functions calculating the all-time high/low of values.
hi(val)
Calculates the all-time high of a series.
Parameters:
val (float) : Series to use (`high` is used if no argument is supplied).
Returns: The all-time high for the series.
lo(val)
Calculates the all-time low of a series.
Parameters:
val (float) : Series to use (`low` is used if no argument is supplied).
Returns: The all-time low for the series.
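Example (a hypothetical sketch of the core idea, persistent `var` state instead of scanning history):
//@version=5
indicator("all-time high sketch", overlay = true)
f_ath(float val) =>
    var float ath = val
    ath := math.max(ath, val)
    ath
plot(f_ath(high))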
VPQuantLib
Library "VPQuantLib"
Misc math, position size and consolidation detection functions that can be used across various scripts.
isPercentAboveReference(current, percent, reference, or_equal)
Checks if the current value is bigger than (or equal to) the reference by the provided percent value
Parameters:
current (float) : - what to check against the reference
percent (float) : - what is the percent to check for difference
reference (float) : - what to compare against
or_equal (bool) : - enables checking for bigger or equal
Returns: true if the current is percent bigger (or equal) to the reference
isPercentBelowReference(current, percent, reference, or_equal)
Checks if the current value is smaller than (or equal to) the reference by the provided percent value
Parameters:
current (float) : - what to check against the reference
percent (float) : - what is the percent to check for difference
reference (float) : - what to compare against
or_equal (bool) : - enables checking for smaller or equal
Returns: true if the current is percent smaller (or equal) to the reference
isInRange(current, reference, min_percent, max_percent, below)
Checks if the current value is greater/smaller than the reference value within the provided percent range
Parameters:
current (float) : - what to check for being in range against the reference
reference (float) : - what to compare against
min_percent (float) : - the min percent range border
max_percent (float) : - the max percent range border
below (bool) : - check if below or above the reference
@return true if the current is bigger/smaller than the reference within the percent range provided
GetRiskBasedPositionSize(account_balance, equity_risk_perc, max_loss_per_share)
Calculates and returns the position size based on the risked percentage of the equity
Parameters:
account_balance (float) : - total account balance
equity_risk_perc (int) : - percent of equity to risk in the trade
max_loss_per_share (float) : - maximum loss per share (in currency, not in %) that we're willing to lose (calculated based on entry_price - stop_loss_price)
@return number of shares to buy
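Example (a worked sketch under the usual fixed-fractional assumption: risk amount = balance * risk% and shares = risk amount / loss per share; not necessarily the library's exact code):
//@version=5
indicator("position size sketch")
f_position_size(float account_balance, int equity_risk_perc, float max_loss_per_share) =>
    float risk_amount = account_balance * equity_risk_perc / 100.0
    math.floor(risk_amount / max_loss_per_share)
// e.g. 10000 balance, 1% risk, 2.50 loss per share -> 40 shares
plot(f_position_size(10000, 1, 2.5))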
CheckInRangeConsolidation(consolidation_period, allowed_consolidation_range, ref_high, ref_low, prev_bar_consolidaton, draw_consolidation_lines)
Checks if the current bar is in a consolidation range
Parameters:
consolidation_period (int) : - the number of bars to consider for consolidation range calculation
allowed_consolidation_range (int) : - the percentage range allowed for the current consolidation range to be considered valid
ref_high (float) : - the reference high value to use for consolidation range calculation
ref_low (float) : - the reference low value to use for consolidation range calculation
prev_bar_consolidaton (bool)
draw_consolidation_lines (bool) : - a boolean indicating if consolidation range lines should be drawn on the chart
@return a tuple of three values:
1. _curr_consolidation - a boolean indicating if the current bar is in consolidation range
2. _curr_consolidation_low - the current consolidation low value
3. _curr_consolidation_high - the current consolidation high value
FindBasicConsolidation(loopback_period, consolidation_length, ref_high, ref_low, draw_consolidation_lines)
Finds basic consolidation areas, looking back 1000 bars to find the pivot of the trend, and checks if the current bar is in a consolidation area by counting the
number of bars that have not broken the consolidation high/low levels
Parameters:
loopback_period (int) : - the number of bars to look back to determine the high/low watermark
consolidation_length (int) : - minimum number of bars required to establish a consolidation period
ref_high (float) : - user input for high (can be based on the bar or wicks)
ref_low (float) : - user input for low (can be based on the bar or wicks)
draw_consolidation_lines (bool) : - enable/disable drawing of the consolidation lines
Returns: _pivot_point - pivot point
commonThe "Pineify/common" library presents a specialized toolkit crafted to empower traders and script developers with state-of-the-art time manipulation functions on the TradingView platform. It is instead a foundational utility aimed at enriching your script's ability to process and interpret time-based data with unparalleled precision.
Key Features
String Splitter:
The 'str_split_into_two' function is a universal string handler that separates any given input into two distinct strings based on a specified delimiter. This function is especially useful in parsing time strings or any scenario where a string needs to be divided into logical parts efficiently.
Example:
[a, b] = str_split_into_two("a:b", ":")
// a = "a"
// b = "b"
Time Parser:
With 'time_to_hour_minute', users can effortlessly convert a time string into numerical hours and minutes. This function is pivotal for those who need to extract specific time series data or wish to schedule their trades down to the minute.
Example:
[time_hour, time_minute] = time_to_hour_minute("02:30")
// time_hour = 2
// time_minute = 30
Unix Time Converter:
The 'time_range_to_unix_time' function transcends traditional boundaries by converting a given time range into Unix timestamp format. This integration of date, time, and timezone allows for a comprehensive approach, letting scripts make timed decisions, perform historical analyses, and account for international markets across different time zones.
Example:
// Support 'hhmm-hhmm' and 'hh:mm-hh:mm'
[start_time, end_time] = time_range_to_unix_time("09:30-12:00")
Summary:
Each function is meticulously designed to minimize complexity and maximize versatility. Whether you are a programmer seeking to streamline your code, or a trader requiring precise timing for your strategies, our library provides the logical framework that aligns with your needs.
The "Pineify/common" library is the bridge between high-level time concepts and actionable trading insights. It serves a multitude of purposes – from crafting elegant time-based triggers to dissecting complex string data. Embrace the power of precision with "Pineify/common" and elevate your TradingView scripting experience to new heights.
Mad_Fibonaccibox
Library "Mad_Fibonaccibox"
This library is designed to create and manage multiple Fibonacci boxes, which are graphical representations based on the inputs.
-----------------
exports:
f_fib_calc(_Fibonacci_box, _itemnumber)
fibonacci calc.
@description This function block uses the levels and parameters set in the type_Fibonacci_box (levels) and fills the corresponding array of prices.
Parameters:
_Fibonacci_box (type_Fibonacci_box[])
_itemnumber (int)
Returns: returns a type_Fibonacci_box with the filled data
f_fib_draw(_Fibonacci_box, _itemnumber)
fibonacci draw.
@description This function block uses the levels, prices and parameters set in the type_Fibonacci_box (levels) and draws the fib on the chart
Parameters:
_Fibonacci_box (type_Fibonacci_box[])
_itemnumber (int)
Returns: returns lines, labels and fills on the chart, no data returns
type_level
Type for defining the lines and texts of a fibonacci box
Fields:
level (series float)
price (series float)
drawline (series bool)
linewidth (series int)
linetype (series string)
fiblinecolor (series color)
drawlabel (series string)
labeltext (series string)
textshift (series int)
fibtextcolor (series color)
fibtextsize (series string)
transp (series int)
type_fill
Type for defining the fills of a fibonacci box
Fields:
partner_A (series int)
partner_B (series int)
fill_color (series color)
transp (series int)
type_Fibonacci_box
Type for defining a fibonacci box
Fields:
bottom_price (series float)
top_price (series float)
StartBar (series int)
StopBar (series int)
levels (type_level[])
fills (type_fill[])
ChartisLog (series bool)
fibreverse (series bool)
fibdrawreverse (series bool)
decimals_price (series int)
decimals_percent (series int)
drawlines (series bool)
drawlabels (series bool)
drawfills (series bool)
draw_biginfo (series bool)
biginfo_textshift (series int)
rangeinfo_location (series int)
rangeinfo_color (series color)
rangeinfo_textsize (series string)
line_array (line[])
linefill_array (linefill[])
label_array (label[])
lib_math
Library "lib_math"
a collection of functions calculating without the history operator to avoid max_bars_back errors
mean(value, reset)
Parameters:
value (float) : series to track
reset (bool) : flag to reset tracking
@return returns average/mean of value since last reset
vwap(value, reset)
Parameters:
value (float) : series to track
reset (bool) : flag to reset tracking
@return returns vwap of value and volume since last reset
variance(value, reset)
Parameters:
value (float) : series to track
reset (bool) : flag to reset tracking
@return returns variance of value since last reset
trend(value, reset)
Parameters:
value (float) : series to track
reset (bool) : flag to reset tracking
@return where slope is the trend direction, correlation is a measurement of how well the values fit the trendline, stddev is how far the values deviate from the trend, x1 would be the time where reset is true and x2 would be the current time
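Example (a hypothetical sketch of the history-operator-free pattern these functions share: accumulate in `var` state and reset on demand):
//@version=5
indicator("incremental mean sketch")
f_mean(float value, bool reset) =>
    var float sum = 0.0
    var int n = 0
    if reset
        sum := 0.0
        n := 0
    sum += value
    n += 1
    sum / n
plot(f_mean(close, timeframe.change("D")))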
Price - TP/SL
Prices
With this library, you can easily manage prices such as stop loss and take profit, calculate differences, get prices from a lower timeframe, and read the order size and commission from the strategy properties tab.
Note that the order size and commission only work with strategies!
Usage
Take Profit & Stop Loss
var bool open_trade = false
open_trade := strategy.position_size != 0
bars_since_opened = strategy.opentrades > 0 ? bar_index - strategy.opentrades.entry_bar_index(strategy.opentrades - 1) + 1 : 0
// ############################################################
// # TAKE PROFIT
// ############################################################
take_profit = input.string(title='Take Profit', defval='OFF', options=['ON', 'OFF'], group='TAKE PROFIT')
take_profit_percentage = input.float(title='Take Profit (% or X)', defval=0, minval=0, step=0.1, group='TAKE PROFIT')
take_profit_bars = input.int(title='Take Profit Bars', defval=0, minval=0, step=1, group='TAKE PROFIT')
take_profit_indication = input.string(title='Take Profit Plot', defval='OFF', options=['ON', 'OFF'], group='TAKE PROFIT')
take_profit_color = input.color(title='Take Profit Color', defval=#26A69A, group='TAKE PROFIT')
take_profit_price = math.round_to_mintick(strategy.position_avg_price)
take_profit_plot = plot(take_profit == 'ON' and take_profit_indication == 'ON' and open_trade and bars_since_opened >= take_profit_bars and take_profit_percentage > 0 and nz(take_profit_price) ? take_profit_price : na, color=take_profit_color, style=plot.style_linebr, linewidth=1, title='TP', offset=0)
// ############################################################
// # STOP LOSS
// ############################################################
stop_loss = input.string(title='Stop Loss', defval='OFF', options=['ON', 'OFF'], group='STOP LOSS')
stop_loss_percentage = input.float(title='Stop Loss (% or X)', defval=0, minval=0, step=0.1, group='STOP LOSS')
stop_loss_bars = input.int(title='Stop Loss Bars', defval=0, minval=0, step=1, group='STOP LOSS')
stop_loss_indication = input.string(title='Stop Loss Plot', defval='OFF', options=['ON', 'OFF'], group='STOP LOSS')
stop_loss_color = input.color(title='Stop Loss Color', defval=#FF5252, group='STOP LOSS')
stop_loss_price = math.round_to_mintick(strategy.position_avg_price)
stop_loss_plot = plot(stop_loss == 'ON' and stop_loss_indication == 'ON' and open_trade and bars_since_opened >= stop_loss_bars and stop_loss_percentage > 0 and nz(stop_loss_price) ? stop_loss_price : na, color=stop_loss_color, style=plot.style_linebr, linewidth=1, title='SL', offset=0)
// ############################################################
// # STRATEGY
// ############################################################
var limit_price = 0.0
var stop_price = 0.0
limit_price := take_profit == 'ON' ? price.take_profit_price(take_profit_price, take_profit_percentage, take_profit_bars, bars_since_opened) : na
stop_price := stop_loss == 'ON' ? price.stop_loss_price(stop_loss_price, stop_loss_percentage, stop_loss_bars, bars_since_opened) : na
strategy.exit(id='TP/SL', comment='TP/SL', from_entry='LONG', limit=limit_price, stop=stop_price)
Calculate difference between 2 prices:
price.difference(close, close[1])
Get last price from lower timeframe:
price.ltf(request.security_lower_tf(ticker, '1', close))
Get the order size from the properties tab:
price.order_size()
Get the commission from the properties tab.
price.commission()
map_custom_value_usefull
Library "map_custom_value_usefull"
makes it possible to create:
1. a map with array values:
for this purpose you need to:
1. create a map with an arrays-type value
2. put your array into this map; the overloaded put method will itself put the array, based on its type, into the required field
3. next you can get this array with the standard get function, which will determine which field you need to get (but because of this, only arrays of the same type can be used in one map)
2. a map with map values:
for this purpose you need to:
1. create a map with a maps-type value
2. put your other map as the value in your base map; you need to put it in the field corresponding to your map type
3. next you can get this map with the standard get function. You need to specify a special field name here, because the get function cannot be overloaded without additional variables
map_custom_value_full
Library "map_custom_value_full"
makes it possible to create:
1. a map with array values:
for this purpose you need to:
1. create a map with an arrays-type value
2. put your array into this map; the overloaded put method will itself put the array, based on its type, into the required field
3. next you can get this array with the standard get function, by specifying the type field of your array
2. a map with map values:
for this purpose you need to:
1. create a map with a maps-type value
2. put your other map as the value in your base map; you need to put it in the field corresponding to your map type
3. next you can get this map with the standard get function, by specifying the type field of your array
3. maps with values in an array of maps:
for this purpose you need to:
1. create a map with an arrays-type value
2. put as the value a maps_arrays field with an array of the maps_arrays type, which should already contain a map of the type you need (not all map-type fields exist here; you can add a map of the required type by adding a corresponding field of the map_arrays type)
3. next you can get this array from the map with the standard get function, by specifying the type field of your array
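Example (a hypothetical minimal version of the wrapper pattern both libraries build on: Pine maps cannot hold arrays directly, so the array is boxed in a user-defined type):
//@version=5
indicator("map-of-arrays sketch")
type FloatArrayBox
    array<float> values
var m = map.new<string, FloatArrayBox>()
if barstate.isfirst
    m.put("closes", FloatArrayBox.new(array.new<float>()))
m.get("closes").values.push(close)
plot(m.get("closes").values.size())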
Polyline Plus
This library introduces the `PolylinePlus` type, which is an enhanced version of the built-in PineScript `polyline`. It enables two features that are absent from the built-in type:
1. Developers can now efficiently add or remove points from the polyline. In contrast, the built-in `polyline` type is immutable, requiring developers to create a new instance of the polyline to make changes, which is cumbersome and incurs a significant performance penalty.
2. Each `PolylinePlus` instance can theoretically hold up to ~1M points, surpassing the built-in `polyline` type's limit of 10K points, as long as it does not exceed the memory limit of the PineScript runtime.
Internally, each `PolylinePlus` instance utilizes an array of `line`s and an array of `polyline`s. The `line`s array serves as a buffer to store lines formed by recently added points. When the buffer reaches its capacity, it flushes the contents and converts the lines into polylines. These polylines are expected to undergo fewer updates. This approach is similar to the concept of "Buffered I/O" in file and network systems. By connecting the underlying lines and polylines, this library achieves an enhanced polyline that is dynamic, efficient, and capable of surpassing the maximum number of points imposed by the built-in polyline.
🔵 API
Step 1: Import this library
import algotraderdev/polylineplus/1 as pp
// remember to check the latest version of this library and replace the 1 above.
Step 2: Initialize the `PolylinePlus` type.
var p = pp.PolylinePlus.new()
There are a few optional params that developers can specify in the constructor to modify the behavior and appearance of the polyline instance.
var p = pp.PolylinePlus.new(
// If true, the drawing will also connect the first point to the last point, resulting in a closed polyline.
closed = false,
// Determines the field of the chart.point objects that the polyline will use for its x coordinates. Either xloc.bar_index (default), or xloc.bar_time.
xloc = xloc.bar_index,
// Color of the polyline. Default is blue.
line_color = color.blue,
// Style of the polyline. Default is line.style_solid.
line_style = line.style_solid,
// Width of the polyline. Default is 1.
line_width = 1,
// The maximum number of points that each built-in `polyline` instance can contain.
// NOTE: this is not to be confused with the maximum number of points that each `PolylinePlus` instance can contain.
max_points_per_builtin_polyline = 10000,
// The number of lines to keep in the buffer. If more points are to be added while the buffer is full, then all the lines in the buffer will be flushed into the polylines.
// The higher the number, the less frequently we'll need to flush the buffer, which leads to better performance.
// NOTE: the maximum total number of lines per chart allowed by PineScript is 500. But given there might be other places where the indicator or strategy is drawing lines outside this polyline context, the default value is 50 to be safe.
lines_buffer_size = 50)
Step 3: Push / Pop Points
// Push a single point
p.push_point(chart.point.now())
// Push multiple points
chart.point[] points = array.from(p1, p2, p3) // Where p1, p2, p3 are all chart.point type.
p.push_points(points)
// Pop point
p.pop_point()
// Resets all the points in the polyline.
p.set_points(points)
// Deletes the polyline.
p.delete()
🔵 Benchmark
Below is a simple benchmark comparing the performance between `PolylinePlus` and the native `polyline` type for incrementally adding 10K points to a polyline.
import algotraderdev/polylineplus/2 as pp
var t1 = 0
var t2 = 0
if bar_index < 10000
int start = timenow
var p = pp.PolylinePlus.new(xloc = xloc.bar_time, closed = true)
p.push_point(chart.point.now())
t1 += timenow - start
start := timenow
var polyline pl = na
var points = array.new<chart.point>()
points.push(chart.point.now())
if not na(pl)
pl.delete()
pl := polyline.new(points)
t2 += timenow - start
if barstate.islast
log.info('{0} {1}', t1, t2)
For this benchmark, `PolylinePlus` took ~300ms, whereas the native `polyline` type took ~6000ms.
We can also fine-tune the parameters for `PolylinePlus` to have a larger buffer size for `line`s and a smaller buffer for `polyline`s.
var p = pp.PolylinePlus.new(xloc = xloc.bar_time, closed = true, lines_buffer_size = 500, max_points_per_builtin_polyline = 1000)
With the above optimization, it only took `PolylinePlus` ~80ms to process the same 10K points, which is ~75x the performance compared to the native `polyline`.
SPTS_StatsPakLib
Finally getting around to releasing the library component to the SPTS indicator!
This library is packed with a ton of great statistics functions to supplement SPTS; these functions add to the capabilities of SPTS, including a forecast function.
The library includes the following functions
1. Linear Regression (single independent and single dependent)
2. Multiple Regression (2 independent variables, 1 dependent)
3. Standard Error of Residual Assessment
4. Z-Score
5. Effect Size
6. Confidence Interval
7. Paired Sample Test
8. Two Tailed T-Test
9. Qualitative assessment of T-Test
10. T-test table and p value assigner
11. Correlation of two arrays
12. Quadratic correlation (curvilinear)
13. R Squared value of 2 arrays
14. R Squared value of 2 floats
15. Test of normality
16. Forecast function which will push the desired forecasted variables into an array.
One of the biggest added functionalities of this library is the forecasting function.
This function provides an autoregressive, trainable model that will export forecasted values to 3 arrays, one contains the autoregressed forecasted results, the other two contain the upper confidence forecast and the lower confidence forecast.
Hope you enjoy and find use for this!
Library "SPTS_StatsPakLib"
f_linear_regression(independent, dependent, len, variable)
TODO: creates a simple linear regression model between two variables.
Parameters:
independent (float)
dependent (float)
len (int)
variable (float)
Returns: TODO: returns 6 float variables
result: The result of the regression model
pear_cor: The pearson correlation of the regresion model
rsqrd: the R2 of the regression model
std_err: the error of residuals
slope: the slope of the model (coefficient)
intercept: the intercept of the model (y = mx + b is y = slope x + intercept)
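Example (a hypothetical stand-alone fit using Pine built-ins, under the standard least-squares formulas slope = cov(x, y) / var(x) and intercept = mean(y) - slope * mean(x); not the library's exact code):
//@version=5
indicator("linear regression sketch", overlay = true)
f_linreg(float y, float x, int len) =>
    float mx = ta.sma(x, len)
    float my = ta.sma(y, len)
    float cov = ta.sma(x * y, len) - mx * my
    float slope = cov / ta.variance(x, len)
    float intercept = my - slope * mx
    [slope, intercept]
[m, b] = f_linreg(close, bar_index, 50)
plot(m * bar_index + b)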
f_multiple_regression(y, x1, x2, input1, input2, len)
Creates a multiple regression model between two independent variables and 1 dependent variable.
Parameters:
y (float)
x1 (float)
x2 (float)
input1 (float)
input2 (float)
len (int)
Returns: 7 float variables
result: The result of the regression model
pear_cor: The Pearson correlation of the regression model
rsqrd: the R2 of the regression model
std_err: the error of residuals
b1 & b2: the slopes of the model (coefficients)
intercept: the intercept of the model (y = mx + b is y = b1 x1 + b2 x2 + intercept)
f_stanard_error(result, dependent, length)
Performs an assessment on the error of residuals; can be used with any variable for which there are residual values (such as moving averages or more complex models)
result is the output; for example, if you are calculating the residuals of a 200 EMA, the result would be the 200 EMA
dependent: is the dependent variable. In the above example with the 200 EMA, your dependent would be the source for your 200 EMA
Parameters:
result (float)
dependent (float)
length (int)
Returns: the standard error of the residual, which can then be multiplied by standard deviations or used as is.
f_zscore(variable, length)
Calculates the z-score
Parameters:
variable (float)
length (int)
Returns: float z-score
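Example (the standard formula z = (x - mean) / stdev over a rolling window; a sketch, not necessarily the library's code):
//@version=5
indicator("z-score sketch")
f_zscore(float src, int len) =>
    (src - ta.sma(src, len)) / ta.stdev(src, len)
plot(f_zscore(close, 20))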
f_effect_size(array1, array2)
Calculates the effect size between two arrays of equal scale.
Parameters:
array1 (float[])
array2 (float[])
Returns: the effect size (float)
f_confidence_interval(array1, array2, ci_input)
Calculates the confidence interval between two arrays
Parameters:
array1 (float[])
array2 (float[])
ci_input (float)
Returns: the upper_bound and lower_bound confidence interval as float values
paired_sample_t(src1, src2, len)
Performs a paired sample t-test
Parameters:
src1 (float)
src2 (float)
len (int)
Returns: the t-statistic and degrees of freedom of a paired sample t-test
two_tail_t_test(array1, array2)
Performs a two-tailed t-test
Parameters:
array1 (float[])
array2 (float[])
Returns: the t-statistic and degrees of freedom of a two-tailed t-test
t_table_analysis(t_stat, df)
Makes a qualitative assessment of your paired and single sample t-test
Parameters:
t_stat (float)
df (float)
Returns: the function will return 2 string variables and 1 colour variable. The 2 string variables indicate whether the results are significant or not, and the colour will
output red for insignificant and green for significant
t_table_p_value(df, t_stat)
Performs a quantitative assessment on your t-tests to determine the statistically significant p value
Parameters:
df (float)
t_stat (float)
Returns: The function will return 1 float variable, the p value of the t-test.
cor_of_array(array1, array2)
Performs a Pearson correlation assessment of two arrays. They need to be of equal size!
Parameters:
array1 (float[])
array2 (float[])
Returns: The function will return the Pearson correlation.
quadratic_correlation(src1, src2, len)
Performs a quadratic (curvilinear) Pearson correlation between two values.
Parameters:
src1 (float)
src2 (float)
len (int)
Returns: The function will return the Pearson correlation (quadratic based).
f_r2_array(array1, array2)
Calculates the R2 of two arrays
Parameters:
array1 (float[])
array2 (float[])
Returns: the R2 value
f_rsqrd(src1, src2, len)
Calculates the R2 of two float variables
Parameters:
src1 (float)
src2 (float)
len (int)
Returns: the R2 value
test_of_normality(array, src)
Tests the normal distribution hypothesis
Parameters:
array (float[])
src (float)
Returns: 4 variables, 2 float and 2 string
Skew: the skewness of the dataset
Kurt: the kurtosis of the dataset
dist = the distribution type (recognizes 7 different distribution types)
implication = a string assessment of the implication of the distribution (qualitative)
f_forecast(output, input, train_len, forecast_length, output_array, upper_array, lower_array)
Performs a simple forecast on a single dependent variable. It will autoregress this based on the train time, to the desired length of output,
then it will push the forecasted values to 3 float arrays: one that contains the forecasted result, one that contains the upper confidence result, and one with the lower confidence
result.
Parameters:
output (float)
input (float)
train_len (int)
forecast_length (int)
output_array (float[])
upper_array (float[])
lower_array (float[])
Returns: 3 arrays: one with the forecasted results, one with the upper confidence results, and a final one with the lower confidence results.
mathLibrary "math"
TODO: Math custom MA and more
pine_ema(src, length)
Parameters:
src (float)
length (int)
pine_dema(src, length)
Parameters:
src (float)
length (int)
pine_tema(src, length)
Parameters:
src (float)
length (int)
pine_sma(src, length)
Parameters:
src (float)
length (int)
pine_smma(src, length)
Parameters:
src (float)
length (int)
pine_ssma(src, length)
Parameters:
src (float)
length (int)
pine_rma(src, length)
Parameters:
src (float)
length (int)
pine_wma(x, y)
Parameters:
x (float)
y (int)
pine_hma(src, length)
Parameters:
src (float)
length (int)
pine_vwma(x, y)
Parameters:
x (float)
y (int)
pine_swma(x)
Parameters:
x (float)
pine_alma(src, length, offset, sigma)
Parameters:
src (float)
length (int)
offset (float)
sigma (float)
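Example (the usual pure-Pine EMA recursion that functions like pine_ema re-implement; a sketch, the library's code may differ):
//@version=5
indicator("pine_ema sketch", overlay = true)
pine_ema(float src, int length) =>
    float alpha = 2.0 / (length + 1)
    var float ema = na
    ema := na(ema) ? src : alpha * src + (1 - alpha) * ema
    ema
plot(pine_ema(close, 20))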
Ephemeris
Library "Ephemeris"
mercuryElements()
mercuryRates()
venusElements()
venusRates()
earthElements()
earthRates()
marsElements()
marsRates()
jupiterElements()
jupiterRates()
saturnElements()
saturnRates()
uranusElements()
uranusRates()
neptuneElements()
neptuneRates()
rev360(x)
Normalize degrees to within [0, 360)
Parameters:
x (float) : degrees to be normalized
Returns: Normalized degrees
scaleAngle(longitude, magnitude, harmonic)
Scale angle in degrees
Parameters:
longitude (float)
magnitude (float)
harmonic (int)
Returns: Scaled angle in degrees
julianCenturyInJulianDays()
Constant Julian days per century
Returns: 36525
julianEpochJ2000()
Julian date on J2000 epoch start (2000-01-01)
Returns: 2451545.0
meanObliquityForJ2000()
Mean obliquity of the ecliptic on J2000 epoch start (2000-01-01)
Returns: 23.43928
getJulianDate(Year, Month, Day, Hour, Minute)
Convert calendar date to Julian date
Parameters:
Year (int) : calendar year as integer (e.g. 2018)
Month (int) : calendar month (January = 1, December = 12)
Day (int) : calendar day of month (e.g. January valid days are 1-31)
Hour (int) : valid values 0-23
Minute (int) : valid values 0-60
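Example (a common Gregorian-to-Julian-date conversion for reference, valid for dates after 1582; the library's own implementation may differ):
//@version=5
indicator("julian date sketch")
f_julian_date(int Year, int Month, int Day, int Hour, int Minute) =>
    int a = (14 - Month) / 12          // integer division
    int y = Year + 4800 - a
    int m = Month + 12 * a - 3
    int jdn = Day + (153 * m + 2) / 5 + 365 * y + y / 4 - y / 100 + y / 400 - 32045
    jdn - 0.5 + Hour / 24.0 + Minute / 1440.0
// 2018-01-01 12:00 UTC -> 2458120.0
plot(f_julian_date(2018, 1, 1, 12, 0))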
julianCenturies(date, epoch_start)
Centuries since Julian Epoch 2000-01-01
Parameters:
date (float) : Julian date to convert to Julian centuries
epoch_start (float) : Julian date of epoch start (e.g. J2000 epoch = 2451545)
Returns: Julian date converted to Julian centuries
julianCenturiesSinceEpochJ2000(julianDate)
Calculate Julian centuries since epoch J2000 (2000-01-01)
Parameters:
julianDate (float) : Julian Date in days
Returns: Julian centuries since epoch J2000 (2000-01-01)
atan2(y, x)
Specialized arctan function
Parameters:
y (float) : radians
x (float) : radians
Returns: special arctan of y/x
eccAnom(ec, m_param, dp)
Compute the eccentric anomaly
Parameters:
ec (float) : Eccentricity of Orbit
m_param (float) : Mean Anomaly ?
dp (int) : Decimal places to round to
Returns: Eccentric anomaly
planetEphemerisCalc(TGen, planetElementId, planetRatesId)
Compute planetary ephemeris (longitude relative to Earth or Sun) on a Julian date
Parameters:
TGen (float) : Julian Date
planetElementId (float[]) : All planet orbital elements in an array. This index references a specific planet's elements.
planetRatesId (float[]) : All planet orbital rates in an array. This index references a specific planet's rates.
Returns: X,Y,Z ecliptic rectangular coordinates and R radius from reference body.
calculateRightAscensionAndDeclination(earthX, earthY, earthZ, planetX, planetY, planetZ)
Calculate right ascension and declination for a planet relative to Earth
Parameters:
earthX (float) : Earth X ecliptic rectangular coordinate relative to Sun
earthY (float) : Earth Y ecliptic rectangular coordinate relative to Sun
earthZ (float) : Earth Z ecliptic rectangular coordinate relative to Sun
planetX (float) : Planet X ecliptic rectangular coordinate relative to Sun
planetY (float) : Planet Y ecliptic rectangular coordinate relative to Sun
planetZ (float) : Planet Z ecliptic rectangular coordinate relative to Sun
Returns: Planet geocentric orbital radius, geocentric right ascension, and geocentric declination
mercuryHelio(T)
Compute Mercury heliocentric longitude on date
Parameters:
T (float)
Returns: Mercury heliocentric longitude on date
venusHelio(T)
Compute Venus heliocentric longitude on date
Parameters:
T (float)
Returns: Venus heliocentric longitude on date
earthHelio(T)
Compute Earth heliocentric longitude on date
Parameters:
T (float)
Returns: Earth heliocentric longitude on date
marsHelio(T)
Compute Mars heliocentric longitude on date
Parameters:
T (float)
Returns: Mars heliocentric longitude on date
jupiterHelio(T)
Compute Jupiter heliocentric longitude on date
Parameters:
T (float)
Returns: Jupiter heliocentric longitude on date
saturnHelio(T)
Compute Saturn heliocentric longitude on date
Parameters:
T (float)
Returns: Saturn heliocentric longitude on date
neptuneHelio(T)
Compute Neptune heliocentric longitude on date
Parameters:
T (float)
Returns: Neptune heliocentric longitude on date
uranusHelio(T)
Compute Uranus heliocentric longitude on date
Parameters:
T (float)
Returns: Uranus heliocentric longitude on date
sunGeo(T)
Parameters:
T (float)
mercuryGeo(T)
Parameters:
T (float)
venusGeo(T)
Parameters:
T (float)
marsGeo(T)
Parameters:
T (float)
jupiterGeo(T)
Parameters:
T (float)
saturnGeo(T)
Parameters:
T (float)
neptuneGeo(T)
Parameters:
T (float)
uranusGeo(T)
Parameters:
T (float)
moonGeo(T_JD)
Parameters:
T_JD (float)
mercuryOrbitalPeriod()
Mercury orbital period in Earth days
Returns: 87.9691
venusOrbitalPeriod()
Venus orbital period in Earth days
Returns: 224.701
earthOrbitalPeriod()
Earth orbital period in Earth days
Returns: 365.256363004
marsOrbitalPeriod()
Mars orbital period in Earth days
Returns: 686.980
jupiterOrbitalPeriod()
Jupiter orbital period in Earth days
Returns: 4332.59
saturnOrbitalPeriod()
Saturn orbital period in Earth days
Returns: 10759.22
uranusOrbitalPeriod()
Uranus orbital period in Earth days
Returns: 30688.5
neptuneOrbitalPeriod()
Neptune orbital period in Earth days
Returns: 60195.0
jupiterSaturnCompositePeriod()
jupiterNeptuneCompositePeriod()
jupiterUranusCompositePeriod()
saturnNeptuneCompositePeriod()
saturnUranusCompositePeriod()
planetSineWave(julianDateInCenturies, planetOrbitalPeriod, planetHelio)
Convert heliocentric longitude of planet into a sine wave
Parameters:
julianDateInCenturies (float)
planetOrbitalPeriod (float) : Orbital period of planet in Earth days
planetHelio (float) : Heliocentric longitude of planet in degrees
Returns: Sine of heliocentric longitude on a Julian date
WIPFunctionLyaponov
Library "WIPFunctionLyaponov"
Lyapunov exponents are mathematical measures used to describe the behavior of a system over
time. They are named after the Russian mathematician Aleksandr Lyapunov, who first introduced the concept in the
late 19th century. The exponent is defined as the rate at which a particular function or variable changes
over time, and can be positive, negative, or zero.
Positive exponents indicate that a system tends to grow or expand over time, while negative exponents
indicate that a system tends to shrink or decay. Zero exponents indicate that the system does not change
significantly over time. Lyapunov exponents are used in various fields of science and engineering, including
physics, economics, and biology, to study the long-term behavior of complex systems.
~ generated description from vicuna13b
---
To calculate the Lyapunov Exponent (LE) of a given Time Series, we need to follow these steps:
1. Firstly, you should have access to your data in some format like CSV or Excel file. If not, then you can collect it manually using tools such as stopwatches and measuring tapes.
2. Once the data is collected, clean it up by removing any outliers that may skew results. This step involves checking for inconsistencies within your dataset (e.g., extremely large or small values) and either discarding them entirely or replacing with more reasonable estimates based on surrounding values.
3. Next, you need to determine the dimension of your time series data. In most cases, this will be equal to the number of variables being measured in each observation period (e.g., temperature, humidity, wind speed).
4. Now that we have a clean dataset with known dimensions, we can calculate the LE for our Time Series using the following formula:
λ = log(||M^T * M - I||)/log(||v||)
where:
λ (Lyapunov Exponent) is the quantity that will be calculated.
||...|| denotes an Euclidean norm of a vector or matrix, which essentially means taking the square root of the sum of squares for each element in the vector/matrix.
M represents our Jacobian Matrix whose elements are given by:
J_ij = (∂fj / ∂xj) where fj is the jth variable and xj is the ith component of the initial condition vector x(t). In other words, each element in this matrix represents how much a small change in one variable affects another.
I denotes an identity matrix whose elements are all equal to 1 (or any constant value if you prefer). This term essentially acts as a baseline for comparison purposes since we want our Jacobian Matrix M^T * M to be close to it when the system is stable and far away from it when the system is unstable.
v represents an arbitrary vector whose Euclidean norm ||v|| will serve as a scaling factor in our calculation. The choice of this particular vector does not matter since we are only interested in its magnitude (i.e., length) for purposes of normalization. However, if you want to ensure that your results are accurate and consistent across different datasets or scenarios, it is recommended to use the same initial condition vector x(t) as used earlier when calculating our Jacobian Matrix M.
5. Finally, once we have calculated λ using the formula above, we can interpret its value in terms of stability/instability for our Time Series data:
- If λ < 0, then this indicates that the system is stable (i.e., nearby trajectories will converge towards each other over time).
- On the other hand, if λ > 0, then this implies that the system is unstable (i.e., nearby trajectories will diverge away from one another over time).
~ generated description from airoboros33b
---
Reference:
en.wikipedia.org
www.collimator.ai
blog.abhranil.net
www.researchgate.net
physics.stackexchange.com
---
This is a work in progress, it may contain errors so use with caution.
If you find flaws or suggest something new, please leave a comment below.
_measure_function(i)
helper function to get the name of a distance function by an index (0 -> 13).
Functions: SSD, Euclidean, Manhattan, Minkowski, Chebyshev, Correlation, Cosine, Camberra, MAE, MSE, Lorentzian, Intersection, Penrose Shape, Meehl.
Parameters:
i (int)
_test(L)
Helper function to test the output exponents' state system and output a description into a string.
Parameters:
L (float[])
estimate(X, initial_distance, distance_function)
Estimate the Lyapunov Exponents for multiple series in a row matrix.
Parameters:
X (map)
initial_distance (float) : Initial distance limit.
distance_function (string) : Name of the distance function to be used, default: `ssd`.
Returns: List of Lyapunov exponents.
max(L)
Maximal Lyapunov Exponent.
Parameters:
L (float[]) : List of Lyapunov exponents.
Returns: Highest exponent.
Contrast Color Library
This lightweight library provides a utility method that analyzes any provided background color and automatically chooses the optimal black or white foreground color to ensure maximum visual contrast and readability.
🟠 Algorithm
The library utilizes the HSP Color Model to calculate the brightness of the background color. The formula for this calculation is as follows:
brightness = sqrt(0.299 * R^2 + 0.587 * G^2 + 0.114 * B^2)
The library chooses black as the foreground color if the brightness exceeds the threshold (default 0.5), and white otherwise.
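Example (a hypothetical sketch of the described check; the function name and parameters are illustrative, not the library's API):
//@version=5
indicator("contrast color sketch", overlay = true)
f_contrast(color bg, float threshold) =>
    float r = color.r(bg) / 255
    float g = color.g(bg) / 255
    float b = color.b(bg) / 255
    math.sqrt(0.299 * r * r + 0.587 * g * g + 0.114 * b * b) > threshold ? color.black : color.white
var t = table.new(position.top_right, 1, 1)
table.cell(t, 0, 0, "text", bgcolor = color.orange, text_color = f_contrast(color.orange, 0.5))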
lib_profile
Library "lib_profile"
a library with functions to calculate a volume profile for either a set of candles within the current chart, or a single candle from its lower timeframe security data. All you need is to feed the arrays of top, bottom and volume values describing those candles (see the history() helper below).
method delete(this)
deletes this bucket's plot from the chart
Namespace types: Bucket
Parameters:
this (Bucket)
method delete(this)
Namespace types: Profile
Parameters:
this (Profile)
method delete(this)
Namespace types: Bucket
Parameters:
this (Bucket[])
method delete(this)
Namespace types: Profile
Parameters:
this (Profile[])
method update(this, top, bottom, value, fraction)
updates this bucket's data
Namespace types: Bucket
Parameters:
this (Bucket)
top (float)
bottom (float)
value (float)
fraction (float)
method update(this, tops, bottoms, values)
update this Profile's data (recalculates the whole profile and applies the result to this object; TODO: optimise to calculate this incrementally to improve realtime performance on high resolutions)
Namespace types: Profile
Parameters:
this (Profile)
tops (float[]) : array of range top/high values (either from ltf or chart candles using the history() function)
bottoms (float[]) : array of range bottom/low values (either from ltf or chart candles using the history() function)
values (float[]) : array of range volume/1 values (either from ltf or chart candles using the history() function; 1s can be used for analysing candles in a bucket/price range over time)
method tostring(this)
allows debug print of a bucket
Namespace types: Bucket
Parameters:
this (Bucket)
method draw(this, start_t, start_i, end_t, end_i, args, line_color)
allows drawing a line in a Profile, representing this bucket and its value, plus its value's fraction of the Profile total value
Namespace types: Bucket
Parameters:
this (Bucket)
start_t (int) : the time x coordinate of the line's left end (depends on the Profile box)
start_i (int) : the bar_index x coordinate of the line's left end (depends on the Profile box)
end_t (int) : the time x coordinate of the line's right end (depends on the Profile box)
end_i (int) : the bar_index x coordinate of the line's right end (depends on the Profile box)
args (LineArgs type from robbatt/lib_plot_objects/24) : the default arguments for the line style
line_color (color) : the color override for POC/VAH/VAL lines
method draw(this, forced_width)
draw all components of this Profile (Box, Background, Bucket lines, POC/VAH/VAL overlay levels and labels)
Namespace types: Profile
Parameters:
this (Profile)
forced_width (int) : allows to force width of the Profile Box, overrides the ProfileArgs.default_size and ProfileArgs.extend arguments (default: na)
method init(this)
Namespace types: ProfileArgs
Parameters:
this (ProfileArgs)
method init(this)
Namespace types: Profile
Parameters:
this (Profile)
profile(tops, bottoms, values, resolution, vah_pc, val_pc, bucket_buffer)
split a chart/parent bar into 'resolution' sections, figure out in which section the most volume/time was spent, by analysing a given set of (intra)bars' top/bottom/volume values. Then return price center of the bin with the highest volume, essentially marking the point of control / highest volume (poc) in the chart/parent bar.
Parameters:
tops (float[]) : array of range top/high values (either from ltf or chart candles using the history() function)
bottoms (float[]) : array of range bottom/low values (either from ltf or chart candles using the history() function)
values (float[]) : array of range volume/1 values (either from ltf or chart candles using the history() function; 1s can be used for analysing candles in a bucket/price range over time)
resolution (int) : amount of buckets/price ranges to sort the candle data into (analyse how much volume / time was spent in a certain bucket/price range) (default: 25)
vah_pc (float) : a threshold percentage (of values' total) for the top end of the value area (default: 80)
val_pc (float) : a threshold percentage (of values' total) for the bottom end of the value area (default: 20)
bucket_buffer (Bucket[]) : optional buffer of empty Buckets to fill; if omitted a new one is created and returned. The buffer length must match the resolution
Returns: poc (price level), vah (price level), val (price level), poc_index (idx in buckets), vah_index (idx in buckets), val_index (idx in buckets), buckets (filled buffer or new)
create_profile(start_idx, tops, bottoms, values, resolution, vah_pc, val_pc, args)
split a chart/parent bar into 'resolution' sections, figure out in which section the most volume/time was spent, by analysing a given set of (intra)bars' top/bottom/volume values. Then return price center of the bin with the highest volume, essentially marking the point of control / highest volume (poc) in the chart/parent bar.
Parameters:
start_idx (int) : the bar_index at which the Profile should start drawing
tops (float[]) : array of range top/high values (either from ltf or chart candles using the history() function)
bottoms (float[]) : array of range bottom/low values (either from ltf or chart candles using the history() function)
values (float[]) : array of range volume/1 values (either from ltf or chart candles using the history() function; 1s can be used for analysing candles in a bucket/price range over time)
resolution (int) : amount of buckets/price ranges to sort the candle data into (analyse how much volume / time was spent in a certain bucket/price range) (default: 25)
vah_pc (float) : a threshold percentage (of values' total) for the top end of the value area (default: 80)
val_pc (float) : a threshold percentage (of values' total) for the bottom end of the value area (default: 20)
args (ProfileArgs)
Returns: poc (price level), vah (price level), val (price level), poc_index (idx in buckets), vah_index (idx in buckets), val_index (idx in buckets), buckets (filled buffer or new)
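Example (hypothetical usage; the import path/version is an assumption, check the publication for the real one):
//@version=5
indicator("lib_profile usage sketch", overlay = true)
import robbatt/lib_profile/1 as prof   // hypothetical import path/version
int LEN = 100
// Build a profile over the last 100 chart candles and mark its POC.
tops = prof.history(high, LEN, 0)
bottoms = prof.history(low, LEN, 0)
vols = prof.history(volume, LEN, 0)
[poc, vah, val, poc_i, vah_i, val_i, buckets] = prof.profile(tops, bottoms, vols, 25, 80, 20)
if barstate.islast
    line.new(bar_index - LEN, poc, bar_index, poc, color = color.orange, width = 2)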
history(src, len, offset)
allows fetching an array of values from the history series with offset from current candle
Parameters:
src (int)
len (int)
offset (int)
history(src, len, offset)
allows fetching an array of values from the history series with offset from current candle
Parameters:
src (float)
len (int)
offset (int)
history(src, len, offset)
allows fetching an array of values from the history series with offset from current candle
Parameters:
src (bool)
len (int)
offset (int)
history(src, len, offset)
allows fetching an array of values from the history series with offset from current candle
Parameters:
src (string)
len (int)
offset (int)
Bucket
Fields:
idx (series int) : the index of this Bucket within the Profile starting with 0 for the lowest Bucket at the bottom of the Profile
value (series float) : the value of this Bucket; can be volume or time — for using time, pass an array of 1s to the update function
top (series float) : the top of this Bucket's price range (for calculation)
btm (series float) : the bottom of this Bucket's price range (for calculation)
center (series float) : the center of this Bucket's price range (for plotting)
fraction (series float) : the fraction this Bucket's value is compared to the total of the Profile
plot_bucket_line (Line type from robbatt/lib_plot_objects/24) : the line that represents this bucket and its value in the Profile
ProfileArgs
Fields:
show_poc (series bool) : whether to plot a POC line across the Profile Box (default: true)
show_profile (series bool) : whether to plot a line for each Bucket in the Profile Box, indicating the value per Bucket (price range), e.g. volume that occurred in a certain time and price range (default: false)
show_va (series bool) : whether to plot a VAH/VAL line across the Profile Box (default: false)
show_va_fill (series bool) : whether to fill the 'value' area between the VAH/VAL lines (default: false)
show_background (series bool) : whether to fill the Profile Box with a background color (default: false)
show_labels (series bool) : whether to add labels to the right end of the POC/VAH/VAL lines (default: false)
show_price_levels (series bool) : whether to add price values to the labels at the right end of the POC/VAH/VAL lines (default: false)
extend (series bool) : whether to extend the Profile Box to the current candle (default: false)
default_size (series int) : the default min. width of the Profile Box (default: 30)
args_poc_line (LineArgs type from robbatt/lib_plot_objects/24) : arguments for the poc line plot
args_va_line (LineArgs type from robbatt/lib_plot_objects/24) : arguments for the va line plot
args_poc_label (LabelArgs type from robbatt/lib_plot_objects/24) : arguments for the poc label plot
args_va_label (LabelArgs type from robbatt/lib_plot_objects/24) : arguments for the va label plot
args_profile_line (LineArgs type from robbatt/lib_plot_objects/24) : arguments for the Bucket line plots
args_profile_bg (BoxArgs type from robbatt/lib_plot_objects/24)
va_fill_color (series color) : color for the va area fill plot
Profile
Fields:
start (series int) : left x coordinate for the Profile Box
end (series int) : right x coordinate for the Profile Box
resolution (series int) : the amount of buckets/price ranges the Profile will dissect the data into
vah_threshold_pc (series float) : the percentage of the total data value to mark the upper threshold for the main value area
val_threshold_pc (series float) : the percentage of the total data value to mark the lower threshold for the main value area
args (ProfileArgs) : the style arguments for the Profile Box
h (series float) : the highest price of the data
l (series float) : the lowest price of the data
total (series float) : the total data value (e.g. volume of all candles, or just one each to analyse candle distribution over time)
buckets (Bucket[]) : the Bucket objects holding the data for each price range bucket
poc_bucket_index (series int) : the Bucket index in buckets, that holds the poc Bucket
vah_bucket_index (series int) : the Bucket index in buckets, that holds the vah Bucket
val_bucket_index (series int) : the Bucket index in buckets, that holds the val Bucket
poc (series float) : the according price level marking the Point Of Control
vah (series float) : the according price level marking the Value Area High
val (series float) : the according price level marking the Value Area Low
plot_poc (Line type from robbatt/lib_plot_objects/24)
plot_vah (Line type from robbatt/lib_plot_objects/24)
plot_val (Line type from robbatt/lib_plot_objects/24)
plot_poc_label (Label type from robbatt/lib_plot_objects/24)
plot_vah_label (Label type from robbatt/lib_plot_objects/24)
plot_val_label (Label type from robbatt/lib_plot_objects/24)
plot_va_fill (LineFill type from robbatt/lib_plot_objects/24)
plot_profile_bg (Box type from robbatt/lib_plot_objects/24)
two_ma_logic
Library "two_ma_logic"
The core logic for the two moving average strategy that is used as an example for the internal logic of
the "Template Trailing Strategy" and the "Two MA Signal Indicator"
ma(source, maType, length)
ma - Calculate the moving average of the given source for the given length and type of the average
Parameters:
source (float) : - The source of the values
maType (simple string) : - The type of the moving average
length (simple int) : - The length of the moving average
Returns: - The resulting value of the calculations of the moving average
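Example (hypothetical usage of ma() for a classic crossover; the import path/version is an assumption):
//@version=5
indicator("two_ma_logic usage sketch", overlay = true)
import jason5480/two_ma_logic/1 as tml   // hypothetical import path/version
fast = tml.ma(close, "EMA", 9)
slow = tml.ma(close, "EMA", 21)
plot(fast, "fast", color.teal)
plot(slow, "slow", color.maroon)
plotshape(ta.crossover(fast, slow), style = shape.triangleup, location = location.belowbar)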
getDealConditions(drawings, longDealsEnabled, shortDealsEnabled, endDealsEnabled, cnlStartDealsEnabled, cnlEndDealsEnabled, emaFilterEnabled, emaAtrBandEnabled, adxFilterEnabled, adxSmoothing, diLength, adxThreshold)
Parameters:
drawings (TwoMaDrawings)
longDealsEnabled (simple bool)
shortDealsEnabled (simple bool)
endDealsEnabled (simple bool)
cnlStartDealsEnabled (simple bool)
cnlEndDealsEnabled (simple bool)
emaFilterEnabled (simple bool)
emaAtrBandEnabled (simple bool)
adxFilterEnabled (simple bool)
adxSmoothing (simple int)
diLength (simple int)
adxThreshold (simple float)
TwoMaDrawings
Fields:
fastMA (series float)
slowMA (series float)
emaLine (series float)
emaUpperBand (series float)
emaLowerBand (series float)
tts_convention
Library "tts_convention"
This library can convert the start, end, cancel start, cancel end deal conditions that are used in the
"Template Trailing Strategy" script into a signal value and vice versa. The "two channels mod div" convention is used
internally, and the signal value can be composed/decomposed into two channels that contain the aforementioned actions
for long and short positions separately.
getDealConditions(signal)
getDealConditions - Get the start, end, cancel start and cancel end deal conditions that are used in the "Template Trailing Strategy" script by decomposing the given signal
Parameters:
signal (int) : - The signal value to decompose
Returns: An object with the start, end, cancel start and cancel end deal conditions for long and short
getSignal(dealConditions)
getSignal - Get the signal value from the composition of the start, end, cancel start and cancel end deal conditions that are used in the "Template Trailing Strategy" script
Parameters:
dealConditions (DealConditions) : - The deal conditions object that contains the start, end, cancel start and cancel end deal conditions for long and short
Returns: The composed signal value
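Example (an illustration of how a mod/div two-channel packing works in general — NOT the library's exact convention, which is internal; use getSignal/getDealConditions rather than rolling your own):
//@version=5
indicator("channel encoding sketch")
// Pack two small integer channels into one signal and unpack them again.
f_compose(int longCh, int shortCh) =>
    longCh * 100 + shortCh
f_decompose(int signal) =>
    [signal / 100, signal % 100]   // integer division and modulo
[longCh, shortCh] = f_decompose(f_compose(2, 1))
plot(longCh)
plot(shortCh)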
DealConditions
Fields:
startLongDeal (series bool)
startShortDeal (series bool)
endLongDeal (series bool)
endShortDeal (series bool)
cnlStartLongDeal (series bool)
cnlStartShortDeal (series bool)
cnlEndLongDeal (series bool)
cnlEndShortDeal (series bool)