Releases: replicate/cog

v0.14.0-alpha1

06 Jan 12:37
693924b
Pre-release

Support for concurrent predictions

This release introduces support for processing predictions concurrently via an async predict function.

To enable the feature, add the new concurrency.max entry to your cog.yaml file:

concurrency:
  max: 32

And update your predictor to use the async def predict syntax:

class Predictor(BasePredictor):
    async def setup(self) -> None:
        print("async setup is also supported...")

    async def predict(self) -> str:
        print("async predict");
        return "hello world";

Cog will now process up to 32 predictions concurrently. Once at capacity, subsequent prediction requests will receive a 409 HTTP response.
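
As an illustration of the capacity behaviour, here is a minimal client-side sketch, assuming Cog's HTTP API served at http://localhost:5000/predictions; the retry loop, payload, and back-off values are illustrative, not part of this release:

import time
import requests

def predict_with_retry(url: str, payload: dict, max_retries: int = 5) -> dict:
    # Retry while the server is at its concurrency.max capacity (HTTP 409).
    for attempt in range(max_retries):
        response = requests.post(url, json=payload, timeout=(5, 300))
        if response.status_code != 409:
            response.raise_for_status()
            return response.json()
        time.sleep(2 ** attempt)  # back off before retrying
    raise RuntimeError("server still at capacity after retries")

result = predict_with_retry("http://localhost:5000/predictions", {"input": {}})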

Iterators

If your model currently uses Iterator or ConcatenateIterator, it will need to be updated to use AsyncIterator or AsyncConcatenateIterator, respectively.

from cog import AsyncConcatenateIterator, BasePredictor

class Predict(BasePredictor):
    async def predict(self) -> AsyncConcatenateIterator[str]:
        for fruit in ["apple", "banana", "orange"]:
            yield fruit

Migrating from 0.10.0a

An earlier fork of cog with concurrency support was published under the 0.10.0a release channel. That fork is now unsupported and will receive no further updates. Some breaking API changes will arrive with the release of the 0.14.0 beta; this alpha release is backwards compatible, and you will see deprecation warnings when calling the deprecated functions.

  • emit_metric(name, value) - this has been replaced by current_scope().record_metric(name, value); a sketch of the migration follows.
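
For example, a minimal sketch of the migration inside a predictor; the import path for current_scope follows these notes, while the metric name and value are illustrative:

from cog import BasePredictor, current_scope

class Predictor(BasePredictor):
    async def predict(self) -> str:
        # Before (0.10.0a, deprecated): emit_metric("tokens", 42)
        # After (0.14.0-alpha1):
        current_scope().record_metric("tokens", 42)  # illustrative metric
        return "hello world"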

Note

The use of current_scope is still experimental and will emit warnings to the console. To suppress these, you can ignore the ExperimentalFeatureWarning:

import warnings
from cog import ExperimentalFeatureWarning
warnings.filterwarnings("ignore", category=ExperimentalFeatureWarning)

Known limitations

  • An async setup method cannot be used without an async predict method. Supported combinations are: sync setup/sync predict, sync setup/async predict, and async setup/async predict. A minimal sketch of the sync setup/async predict combination follows this list.
  • File uploads block the event loop. If your model outputs File or Path types, uploading them currently blocks the event loop; this may be an issue for large file outputs and will be fixed in a future release.
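
For reference, a minimal sketch of the supported sync setup/async predict combination; the attribute set in setup is an illustrative placeholder for real model loading:

from cog import BasePredictor

class Predictor(BasePredictor):
    def setup(self) -> None:
        # Synchronous setup is still allowed alongside an async predict.
        self.greeting = "hello"  # placeholder for loading model weights

    async def predict(self, name: str = "world") -> str:
        return f"{self.greeting} {name}"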

Other Changes

  • Change torch vision to 0.20.0 for torch 2.5.0 cpu by @8W9aG in #2074
  • Ignore files within a .git directory by @8W9aG in #2087
  • Add fast build flag to cog by @8W9aG in #2086
  • Make dockerfile generators abstract by @8W9aG in #2088
  • Do not run a separate python install stage by @8W9aG in #2094

Full Changelog: v0.13.6...v0.14.0-alpha1

v0.10.0-alpha27

04 Dec 10:50
Pre-release

Changelog

v0.13.6

03 Dec 16:32
8e9e53e

Changelog

v0.13.3

25 Nov 16:01

This release includes an important bug fix: all usage of requests now sets explicit connection timeouts. Other changes include tidying related to the removal of Python 3.7 support, adding the output of pip freeze as a Docker image label, and some groundwork towards supporting concurrent predictions.
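
For illustration, the general technique behind that fix: passing an explicit timeout to requests so a stalled connection cannot block forever. This is a generic sketch, not cog's internal code; the URL and durations are illustrative:

import requests

# A (connect, read) tuple sets explicit limits for both phases;
# without it, requests can block indefinitely on a dead connection.
response = requests.get("https://example.com/weights.bin", timeout=(5, 60))
response.raise_for_status()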

Changelog

  • 8e1091f Add lock to subscribers dictionary
  • 3d9c298 Add pip freeze to docker label (#2062)
  • 3e56e59 Always set timeout on requests (#2064)
  • 746ec53 Fix flake on test_path_temporary_files_are_removed (#2059)
  • 2bc4710 Make TestWorkerState aware of prediction tags
  • 425d5a2 More python 3.7 tidying (#2063)
  • 8630036 PR feedback
  • 9c894d6 Update Worker to support concurrent predictions
  • cf0f8b2 Update python/cog/server/worker.py
  • db1cbef make clear why we read the PredictionInput childworker event
  • 5f6a742 update TestWorkerState to support concurrent subscribers

v0.13.2

14 Nov 21:37
d714a70

Changelog

  • d714a70 Add ability to wait for an environment (#1957)
  • 465afe1 Add environment variable backed properties to config (#2051)
  • c02a2b3 Add integration test for multiprocessing usage (#2046)

v0.13.1

13 Nov 16:53
23aac48

Changelog

v0.13.0

05 Nov 18:16
4de7f61

Changelog

v0.12.1

05 Nov 17:59
94b71b8

Changelog

v0.12.0

30 Oct 18:06
eb04c7b

Changelog

  • 5e2218f Add Setup Logging (#2018)
  • 5c1908f Add integration tests around stream redirection (#2027)
  • 2781f5c Add local ignore for integration test fixture outputs (#2039)
  • a5759db Downgrade typing-extensions to fix conflict with spacy and pydantic (#2033)
  • f8e3461 Drop python 3.7 from test matrix (#2028)
  • 7a9d694 Manually patch CuDNN in cuda base images index (#2036)
  • 5eb31ff Support async predictors (✨ again ✨) (#2025)
  • eb04c7b Update section in CONTRIBUTING about release tags (#2038)

v0.11.6

25 Oct 18:18
8333a83

Changelog

This reverts a change introduced in v0.11.4 to begin supporting async predict functions. The changes to how output redirection is handled broke a subset of models that start subprocesses during setup and then write to stdout and/or stderr at predict time.
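
To illustrate the affected pattern, a minimal sketch of the kind of model that broke; the helper command is an illustrative placeholder:

import subprocess
from cog import BasePredictor

class Predictor(BasePredictor):
    def setup(self) -> None:
        # The helper process inherits stdout/stderr as they exist at setup time.
        self.proc = subprocess.Popen(["my-helper", "--serve"])  # placeholder command

    def predict(self) -> str:
        # Output the helper writes during predict was mishandled by the
        # redirection changes that this release reverts.
        return "ok"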