API Documentation#

feste – Main package#

feste.compute(*args, scheduler_fn: ~typing.Callable = <function get_multiprocessing>, optimize_graph: bool = True, **kwargs) Any[source]#

Computes the given objects using the provided scheduler (by default, the multiprocessing scheduler).

Parameters:
  • scheduler_fn – a scheduler (defaults to the multiprocessing scheduler)

  • optimize_graph – whether the graph should be optimized

Returns:

computed objects
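As a rough mental model only (not Feste's actual implementation), this kind of compute call can be sketched as walking a Dask-style task dictionary in dependency order:

```python
from graphlib import TopologicalSorter

# Hypothetical stand-in for a Feste-style task graph: each key maps to a
# (callable, *args) tuple, where args may be other keys or literal values.
def run_graph(graph: dict) -> dict:
    """Execute tasks in dependency order and return all results by key."""
    deps = {k: [a for a in spec[1:] if a in graph] for k, spec in graph.items()}
    results = {}
    for key in TopologicalSorter(deps).static_order():
        fn, *args = graph[key]
        # Keys resolve to earlier results; literal args pass through unchanged.
        resolved = [results.get(a, a) for a in args]
        results[key] = fn(*resolved)
    return results

graph = {
    "x": (lambda: 2,),
    "y": (lambda: 3,),
    "sum": (lambda a, b: a + b, "x", "y"),
}
print(run_graph(graph)["sum"])  # 5
```

A real scheduler would dispatch ready tasks to worker processes instead of running them serially, but the dependency-ordering idea is the same.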

feste.prompt – Prompting#

class feste.prompt.FesteEnvironment(**kwargs)[source]#

Bases: Environment

This is the default Feste environment; it adds Feste’s global utilities to the Jinja2 environment.

add_feste_globals() None[source]#
exception feste.prompt.LanguageMismatch[source]#

Bases: UserWarning

Warning raised when languages are mixed across prompts.

class feste.prompt.Prompt(template: str, language: str | Language = 'en', environment: Environment | None = None)[source]#

Bases: FesteBase

Prompt utility. This class represents a prompt and its associated language and environment.

Parameters:
  • template – the prompt template (in Jinja2 format)

  • language – language code; defaults to en (follows ISO 639)

  • environment – optional environment, defaults to Feste’s env.

classmethod from_file(filename: Path | str, **kwargs)[source]#

Loads the prompt from a text file.

Parameters:
  • filename – the filename or Python’s native Path object.

  • kwargs – extra arguments being passed to the Prompt constructor.

property language: Language#

Returns the ISO 639 language code of the prompt.

variables() set[str][source]#

Return the set of variables present in the template.

Returns:

set of variables.
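For simple templates, the effect of variables() can be sketched with a regular expression; the real implementation presumably relies on Jinja2's own parser (e.g. jinja2.meta.find_undeclared_variables), so this is only an illustration:

```python
import re

# Illustrative sketch only: extract {{ variable }} names from a simple
# Jinja2-style template string using a regular expression.
def template_variables(template: str) -> set[str]:
    """Return the set of {{ variable }} names in a template."""
    return set(re.findall(r"\{\{\s*(\w+)\s*\}\}", template))

print(template_variables("Hello {{ name }}, welcome to {{ place }}!"))
```

Note that a regex will not handle expressions, filters, or control blocks; Jinja2's parser does.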

feste.graph – Graph and graph manipulation#

class feste.graph.FesteGraph(graph: dict[str, Any])[source]#

Bases: Mapping

A computational graph representing the flow described by the call of Feste tasks.

Parameters:

graph – a dictionary to initialize the graph from.

classmethod collect(*args) tuple['FesteGraph', list, callable][source]#

Create a Feste graph from a collection of objects.

Parameters:

args – collection of objects.

Returns:

Tuple (Graph, collections, repack function)

dagviz_metro(svg_handle: TextIO) None[source]#

Writes a metro-style DAG visualization using dagviz.

Parameters:

svg_handle – a text file-like object that the SVG content is written into

get_all_dependencies() dict[str, str][source]#

Returns a dict with all dependencies.

order() dict[str, int][source]#

Return the execution order hint.

print() None[source]#

Print the internal graph representation.

to_dict() dict[source]#

Convert the graph to a dictionary.

topological_sorter() TopologicalSorter[source]#

Returns a topological sorter from the graph dependencies.
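graphlib.TopologicalSorter is part of the Python standard library; as a sketch, here is how such a sorter is built from a dependency mapping (the shape of the mapping, node to the nodes it depends on, is an assumption about what get_all_dependencies() returns):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency mapping: each node maps to the set of nodes
# it depends on, so dependencies must execute first.
deps = {"c": {"a", "b"}, "b": {"a"}, "a": set()}

ts = TopologicalSorter(deps)
order = list(ts.static_order())
print(order)  # "a" always precedes "b", and both precede "c"
```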

update(graph_dict: dict[str, Any]) None[source]#

Update the internal graph from a dictionary.

Parameters:

graph_dict – dictionary to update from.

visualize(filename: str) None[source]#

Export the graph into a file.

Parameters:

filename – filename to export the graph (e.g. .pdf, .png)

feste.compute – Feste computing#

feste.compute.compute(*args, scheduler_fn: ~typing.Callable = <function get_multiprocessing>, optimize_graph: bool = True, **kwargs) Any[source]

Computes the given objects using the provided scheduler (by default, the multiprocessing scheduler).

Parameters:
  • scheduler_fn – a scheduler (defaults to the multiprocessing scheduler)

  • optimize_graph – whether the graph should be optimized

Returns:

computed objects

feste.scheduler – Feste scheduler#

feste.scheduler.get_async(submit, num_workers, dsk, result, cache=None, get_id=<function default_get_id>, rerun_exceptions_locally=None, pack_exception=<function default_pack_exception>, raise_exception=<function reraise>, callbacks=None, dumps=<function identity>, loads=<function identity>, chunksize=None, **kwargs)[source]#

This is mostly Dask’s get_async, with changes that introduce optimization during execution (batching, for example).

feste.scheduler.get_multiprocessing(dsk: Mapping, keys: Sequence[Hashable] | Hashable, num_workers=None, func_loads=None, func_dumps=None, optimize_graph=True, pool=None, initializer=None, chunksize=None, **kwargs)[source]#

feste.task – Feste tasking#

class feste.task.FesteBase[source]#

Bases: object

Feste base class used by backends. Every backend added to Feste must inherit from this class, as it adds support for eager execution and optimizations.

classmethod optimizations() list[feste.optimization.Optimization][source]#
class feste.task.FesteDelayed(key, dsk, length=None, layer=None)[source]#

Bases: Delayed

Feste delayed is a lazy-evaluation node in Feste’s graph.

compute(**kwargs) Any[source]#

Compute this dask collection

This turns a lazy Dask collection into its in-memory equivalent. For example a Dask array turns into a NumPy array and a Dask dataframe turns into a Pandas dataframe. The entire dataset must fit into memory before calling this operation.

Parameters#

scheduler : string, optional

Which scheduler to use like “threads”, “synchronous” or “processes”. If not provided, the default is to check the global settings first, and then fall back to the collection defaults.

optimize_graph : bool, optional

If True [default], the graph is optimized before computation. Otherwise the graph is run as is. This can be useful for debugging.

kwargs

Extra keywords to forward to the scheduler function.

See Also#

dask.base.compute

class feste.task.FesteDelayedLeaf(obj: Any, key: Any, pure: bool | None = None, nout: int | None = None)[source]#

Bases: FesteDelayed

property dask: Any#
feste.task.call_function(func, func_token, args, kwargs, pure=None, nout=None) Any[source]#
feste.task.feste_task(obj: Any = '__no__default__', name: Any | None = None, pure: bool | None = None, nout: int | None = None, traverse: bool = True) FesteDelayed[source]#

Function and decorator used to introduce lazy-evaluation nodes into Feste’s computation graph.

feste.context – Context management (configuration)#

feste.context.get(key: str) Any[source]#

Gets the global context configuration for a particular key.

Parameters:

key – configuration key.

Returns:

the configuration for the specified key.

class feste.context.set(**kwargs)[source]#

Bases: object

Creates a context to replace a configuration key.

Parameters:

kwargs – the configuration keys.
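A minimal sketch of how get and a set-style context manager could interact, assuming a dict-backed global configuration (an assumption about the internals; the class is named set_context here only to avoid shadowing the builtin set):

```python
# Hypothetical dict-backed global configuration.
_config = {"scheduler": "multiprocessing"}

def get(key):
    """Return the configuration value for a key."""
    return _config[key]

class set_context:
    """Temporarily override configuration keys within a `with` block."""
    def __init__(self, **kwargs):
        self._new = kwargs
    def __enter__(self):
        # Remember previous values so they can be restored on exit.
        self._old = {k: _config.get(k) for k in self._new}
        _config.update(self._new)
        return self
    def __exit__(self, *exc):
        _config.update(self._old)

with set_context(scheduler="synchronous"):
    print(get("scheduler"))  # "synchronous" inside the block
print(get("scheduler"))      # back to "multiprocessing"
```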

feste.optimization – Static and dynamic optimizations#

class feste.optimization.BatchOptimization(rewrite_rules: dict[Callable, Callable])[source]#

Bases: Optimization

This is a static optimization that batches calls ahead of execution. A complementary batching optimization happens during scheduling, since tasks may become ready earlier or later than static analysis can predict.

Parameters:

rewrite_rules – rules describing how to change a single call into a batched call, for APIs that support it.

apply(graph: FesteGraph) FesteGraph[source]#

Apply the optimization to the graph and return the modified graph.

Parameters:

graph – Feste graph to optimize

Returns:

optimized graph

class feste.optimization.Optimization[source]#

Bases: ABC

Optimization abstract class. This class represents an optimization that can be applied on the Feste graph.

abstract apply(graph: FesteGraph) FesteGraph[source]#

Apply the optimization to the graph and return the modified graph.

Parameters:

graph – Feste graph to optimize

Returns:

optimized graph

class feste.optimization.Optimizer(optimizations: list[feste.optimization.Optimization])[source]#

Bases: object

This is the Feste optimizer; it receives a list of optimizations and applies them to a Feste graph.

Parameters:

optimizations – list of Feste optimizations.

apply(graph: FesteGraph) FesteGraph[source]#

Apply all optimizations to the Feste graph.

Parameters:

graph – graph to optimize

Returns:

the optimized graph (with all optimizations applied)

classmethod from_backends() Optimizer[source]#

Create the optimizer using all optimizations from classes that inherit from the FesteBase backend class.

feste.optimization.make_getitem_task(object: Any, index: int) Any[source]#

This function creates a new __getitem__ task, which is used to extract single values from the return value of fused calls.

Parameters:
  • object – the object to get the item from

  • index – which index to get

Returns:

a task tuple (function, object, index)
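A sketch of the task tuple described above, assuming operator.getitem as the extraction function (an assumption; the actual function Feste uses may differ):

```python
import operator

# Hypothetical sketch: build a Dask-style task tuple that, when executed,
# extracts one element from the result of a fused batched call.
def make_getitem_task(obj, index):
    return (operator.getitem, obj, index)

# After fusing two complete() calls into one batched call, the batched
# result is a list; each original call site is rewritten to a getitem
# task that pulls out its own element.
batched_result = ["answer-0", "answer-1"]
fn, obj, idx = make_getitem_task(batched_result, 1)
print(fn(obj, idx))  # "answer-1"
```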

feste.backend.openai – OpenAI Backend#

class feste.backend.openai.CompleteParams(model: str = 'text-davinci-003', suffix: str | None = None, max_tokens: int = 16, temperature: float = 1.0, top_p: float = 1.0, n: int = 1, stream: bool = False, logprobs: int | None = None, echo: bool = False, stop: str | None = None, presence_penalty: float = 0.0, frequency_penalty: float = 0.0, best_of: int = 1, logit_bias: dict[str, int] | None = None, user: str | None = None)[source]#

Bases: NamedTuple

Parameters for the OpenAI Complete API.

best_of: int#

Alias for field number 12

echo: bool#

Alias for field number 8

frequency_penalty: float#

Alias for field number 11

logit_bias: dict[str, int] | None#

Alias for field number 13

logprobs: int | None#

Alias for field number 7

max_tokens: int#

Alias for field number 2

model: str#

Alias for field number 0

n: int#

Alias for field number 5

presence_penalty: float#

Alias for field number 10

stop: str | None#

Alias for field number 9

stream: bool#

Alias for field number 6

suffix: str | None#

Alias for field number 1

temperature: float#

Alias for field number 3

top_p: float#

Alias for field number 4

user: str | None#

Alias for field number 14
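CompleteParams behaves like any typing.NamedTuple. As a sketch, using a hypothetical subset of the fields listed above, defaults apply at construction and _replace produces an updated copy without mutating the original:

```python
from typing import NamedTuple

# Hypothetical subset of CompleteParams fields, for illustration only.
class Params(NamedTuple):
    model: str = "text-davinci-003"
    max_tokens: int = 16
    temperature: float = 1.0

defaults = Params()
# NamedTuples are immutable; _replace returns a modified copy.
tweaked = defaults._replace(temperature=0.2)
print(tweaked.temperature, tweaked.model)
```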

class feste.backend.openai.OpenAI(api_key: str, organization: str | None = None)[source]#

Bases: FesteBase

This is the OpenAI API main class.

Parameters:
  • api_key – the OpenAI API key

  • organization – optional organization

complete#

This is the OpenAI official complete() API.

Parameters:
  • prompt – input prompt text

  • complete_params – the API parameters (e.g. temperature, etc)

complete_batch#

This is the OpenAI official complete() API, but batched.

Parameters:
  • prompt – input prompt text list

  • complete_params – the API parameters (e.g. temperature, etc)

classmethod optimizations() list[feste.optimization.Optimization][source]#

Optimizations implemented for OpenAI API.

static set_api_key(api_key: str, organization: str | None = None) None[source]#

Sets the API key and organization in the OpenAI module.

Parameters:
  • api_key – the OpenAI API key

  • organization – optional organization

feste.backend.cohere – Cohere Backend#

class feste.backend.cohere.Cohere(api_key: str, client_name: str | None = None, check_api_key: bool = True, max_retries: int = 3)[source]#

Bases: FesteBase

This is the Cohere API main class.

Note

Note that the Cohere API uses an internal thread pool to make its calls. Feste’s implementation replaces this pool with a dummy one, because the calls are already parallelized outside of the Cohere API implementation.

Parameters:
  • api_key – the Cohere API key

  • client_name – optional client name

  • check_api_key – whether the API key should be checked (offline)

  • max_retries – default number of retries

generate#

This is the Cohere official generate() API.

Parameters:
  • prompt – input prompt text

  • complete_params – the API parameters (e.g. temperature, etc)

class feste.backend.cohere.DummyExecutor[source]#

Bases: Executor

submit(fn, *args, **kwargs) Future[source]#

Submits a callable to be executed with the given arguments.

Schedules the callable to be executed as fn(*args, **kwargs) and returns a Future instance representing the execution of the callable.

Returns:

A Future representing the given call.
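A minimal sketch of a DummyExecutor-style synchronous executor, using only the standard library: it runs the callable immediately on the calling thread and wraps the outcome in a Future, so callers written against the Executor interface need no changes.

```python
from concurrent.futures import Executor, Future

class SyncExecutor(Executor):
    """Executor that runs callables synchronously on the calling thread."""
    def submit(self, fn, *args, **kwargs) -> Future:
        future = Future()
        try:
            future.set_result(fn(*args, **kwargs))
        except BaseException as exc:
            # Preserve Executor semantics: exceptions surface via the Future.
            future.set_exception(exc)
        return future

result = SyncExecutor().submit(lambda a, b: a + b, 2, 3).result()
print(result)  # 5
```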

class feste.backend.cohere.GenerateParams(prompt_vars: object = {}, model: str | None = 'xlarge', preset: str | None = None, num_generations: int | None = None, max_tokens: int | None = None, temperature: float | None = None, k: int | None = None, p: float | None = None, frequency_penalty: float | None = None, presence_penalty: float | None = None, end_sequences: list[str] | None = None, stop_sequences: list[str] | None = None, return_likelihoods: str | None = None, truncate: str | None = None, logit_bias: dict[int, float] = {})[source]#

Bases: NamedTuple

Parameters for the Cohere generate API.

end_sequences: list[str] | None#

Alias for field number 10

frequency_penalty: float | None#

Alias for field number 8

k: int | None#

Alias for field number 6

logit_bias: dict[int, float]#

Alias for field number 14

max_tokens: int | None#

Alias for field number 4

model: str | None#

Alias for field number 1

num_generations: int | None#

Alias for field number 3

p: float | None#

Alias for field number 7

presence_penalty: float | None#

Alias for field number 9

preset: str | None#

Alias for field number 2

prompt_vars: object#

Alias for field number 0

return_likelihoods: str | None#

Alias for field number 12

stop_sequences: list[str] | None#

Alias for field number 11

temperature: float | None#

Alias for field number 5

truncate: str | None#

Alias for field number 13