Releases: FluxML/Flux.jl
v0.13.9
Flux v0.13.9
Closed issues:
- Iteration over `params(m)` in explicit mode gives no gradient (#2091) (see the sketch below)
- `Flux.Optimise.update!` updating grads instead of params? (#2121)
- Flux.reset! triggers a BoundsError (#2124)
Merged pull requests:
- Remove `train!` from quickstart example (#2110) (@mcabbott)
- Re-organise "built-in layers" section (#2112) (@mcabbott)
- Narrower version of `@non_differentiable params` (#2118) (@mcabbott)
- allow non-tuple data in the new train! (#2119) (@CarloLucibello)
- fix train! test (#2123) (@CarloLucibello)
- Move 5 tutorials from fluxml.github.io (#2125) (@mcabbott)
- Remove Flux.Data module (#2126) (@mcabbott)
- CompatHelper: bump compat for Functors to 0.4, (keep existing compat) (#2128) (@github-actions[bot])
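Issue #2091 above concerns iterating over `params(m)` inside an explicit-mode gradient. For reference, a minimal sketch of the explicit-mode pattern, where the gradient is taken with respect to the model object itself rather than an implicit `Params` collection; the model, data and loss are illustrative placeholders:

```julia
using Flux

m = Dense(2 => 1)                                 # illustrative toy model
x, y = rand(Float32, 2, 8), rand(Float32, 1, 8)   # illustrative toy data

# Explicit (Zygote) mode: differentiate with respect to the model itself,
# rather than iterating over params(m) inside the loss (the pattern that
# issue #2091 reports as giving no gradient).
grads = Flux.gradient(model -> Flux.Losses.mse(model(x), y), m)
```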
v0.13.8
v0.13.7
Flux v0.13.7
Closed issues:
- DimensionMismatch("array could not be broadcast to match destination") (#1457)
- Warn on `NaN` loss (#1981)
- Make `create_bias` a public API? (#2049)
- Make `rng_from_array` non-differentiable (#2062)
- `@autosize` does not work with semi-colon separated kwargs (#2086)
- early_stopping does not work as expected (#2089)
Merged pull requests:
- Documentation headings & sections (#2056) (@mcabbott)
- Add a dark mode version of logo (#2063) (@Saransh-cpp)
- Fix a few crossrefs + update Zygote's page (#2064) (@Saransh-cpp)
- Make `rng_from_array` non differentiable (#2065) (@Saransh-cpp)
- Add an example to the readme? (#2067) (@mcabbott)
- Add a quick start example, and change some headings (#2069) (@mcabbott)
- Stop training on Inf/NaN loss (#2070) (@mcabbott)
- Export `Embedding` (#2072) (@mcognetta)
- Relax `RNN`/`LSTM`/`GRUCell` internal matrix type restrictions (#2073) (@mcognetta)
- Finish docs for #2073 (#2075) (@mcognetta)
- Add `@autosize` (#2078) (@mcabbott) (see the sketch after this list)
- Back to create_bias (#2081) (@Saransh-cpp)
- Simplify `Embedding` (#2084) (@mcabbott)
- Fix `|> gpu` bug in `@autosize` (#2085) (@mcabbott)
- Fix #2086 re `@autosize` (#2087) (@mcabbott)
- Use the standard Documenter.jl local redirect (#2093) (@ChrisRackauckas)
- CompatHelper: bump compat for MLUtils to 0.3, (keep existing compat) (#2095) (@github-actions[bot])
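PR #2078 above adds the `@autosize` macro (with follow-up fixes in #2085 and #2087). A minimal sketch of its intended use; the input shape and layer sizes here are illustrative placeholders:

```julia
using Flux

# The tuple is the size of one input batch (28×28 images, 1 channel, batch of 32);
# each `_` is filled in with the size inferred from the previous layer.
model = Flux.@autosize (28, 28, 1, 32) Chain(
    Conv((3, 3), _ => 16, relu),
    Flux.flatten,
    Dense(_ => 10),
)
```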
v0.13.6
Flux v0.13.6
Closed issues:
- OneHotArrays.jl? (#1544)
- [Discussion]: doctests, docstrings, documentation manual, and unclear internal API (for newcomers) (#1990)
- [Bug]: Swapped `alpha` and `beta` in `tversky` loss? (#1993)
- [Discussion]: documentation for `@reexport`ed and `import`ed (or `using`) packages (#2038)
- Pull request #2007 causes Flux.params() calls to not get cached (#2040)
- v0.13.5 breaks Flux.train! on a custom type (#2045)
- Bounds error for Flux.reset! in loss function (#2057)
Merged pull requests:
- Miscellaneous docstring additions and fixes (#1998) (@Saransh-cpp)
- Use muladd for LSTM cell matmuls (#2023) (@ToucheSir)
- using OneHotArrays (#2025) (@mcabbott) (see the sketch after this list)
- mark `stop`, `skip`, `@epochs` as deprecated (#2027) (@mcabbott)
- Fix the last remaining 404 errors (#2035) (@Saransh-cpp)
- Add ability to filter `loadmodel!` recursion (#2041) (@darsnack)
- Mark `track_stats=true` as deprecated (#2042) (@akahard2dj)
- Better docs for reexported packages (#2046) (@Saransh-cpp)
- Typo in BatchNorm number of channels assertion (#2047) (@Marcovela)
- Add extra test for params (#2051) (@christiangnrd)
- Restore some private functions (#2052) (@ToucheSir)
- Make params non-differentiable (Closes #2040 & #2048) (#2054) (@christiangnrd)
- Leftover changes from #2046 (#2055) (@Saransh-cpp)
- `unthunk` in some rules (#2058) (@mcabbott)
- Fix the failing CI build (#2059) (@christiangnrd)
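PR #2025 above moves the one-hot machinery into the new OneHotArrays.jl package (the split proposed in issue #1544), with the utilities still re-exported from Flux. A small illustrative sketch:

```julia
using Flux   # onehotbatch / onecold now come from OneHotArrays.jl, re-exported by Flux

labels = [:cat, :dog, :cat]
y = Flux.onehotbatch(labels, [:cat, :dog])   # 2×3 one-hot matrix
Flux.onecold(y, [:cat, :dog])                # recovers [:cat, :dog, :cat]
```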
v0.13.5
Flux v0.13.5
Closed issues:
- PINN loss doesn't converge to 0? (#1966)
- Simple chaining compatibility check (#2017)
- v0.12.10 => v0.13.4 breaks `Dropout` on CUDA (#2018)
- Wrong rrule dispatch for Array constructor (#2033)
Merged pull requests:
- Get rid of documentation warnings and 404 pages (#1987) (@Saransh-cpp)
- use Functors 0.3 in Flux (#2007) (@mcabbott)
- Typo (#2020) (@trigaten)
- Add `NNlib.grid_sample` (#2022) (@scheidan)
- Remove CTC loss (moved to NNlib) (#2024) (@mcabbott)
- Fix typo in docs (#2030) (@svilupp)
- fix array constructor rrule (#2034) (@chengchingwen)
v0.13.4
Flux v0.13.4
Closed issues:
- Repository: on the addition of loss/distance functions and other niceties to Flux (#826)
- `trainable` for BatchNorm stops parameters from being saved and loaded (#1027)
- Non-descriptive arg in `Conv`: why `filter` instead of `size`? (#1212)
- Ada or ADA (#1949)
- Make `gpu(::DataLoader)` work or error loudly if it doesn't (#1974)
- Conversion error when loading a model with v0.13+ with BSON (#1984)
- GPU broadcasting error when using softmax on GPU (#1994)
- Error when using CUDA (#1997)
- type cannot been referred with structured model function (#2000)
- [Broken Documentation] Dense(1 => 1) (#2001)
Merged pull requests:
- Fix slight typos in `LayerNorm` docs (#1975) (@theabhirath)
- Piratical errors for two mistakes (#1976) (@mcabbott)
- Show `using Flux` before BSON `@load` (#1977) (@JeffFessler)
- Update docstrings of `basic.jl` and `conv.jl` (#1978) (@Saransh-cpp)
- Added Common GPU Workflows in Docs (#1980) (@lfenzo)
- `PairwiseFusion` layer, take 2 (#1983) (@theabhirath)
- deprecations.jl: depwarn -> Base.depwarn (#1985) (@skleinbo)
- Update docstrings in `upsample.jl`, `recurrent.jl`, and `normalise.jl` (#1995) (@Saransh-cpp)
- replace ADAM with Adam and its variants thereof (#1996) (@Karthik-d-k) (see the sketch after this list)
- Make `Dropout` docs a little more user friendly (#2014) (@theabhirath)
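PR #1996 above renames the optimisers away from the all-caps spellings (issue #1949), keeping the old names as deprecations through the 0.13 series. A minimal sketch of the new spellings:

```julia
using Flux

opt  = Adam(1e-3)    # formerly ADAM(1e-3); the old spelling is deprecated
optw = AdamW(1e-3)   # variants follow the same pattern, e.g. ADAGrad -> AdaGrad, NADAM -> NAdam
```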
v0.13.3
Flux v0.13.3
Merged pull requests:
v0.13.2
Flux v0.13.2
Closed issues:
Merged pull requests:
- Unify `ecosystem.md` (#1923) (@Saransh-cpp)
- Updated path to DiffImages.jl (#1964) (@arcAman07)
- Explain `stride≠1` case for SamePad (#1965) (@KronosTheLate) (see the sketch after this list)
- fast sigmoid (#1968) (@oysteinsolheim)
- CompatHelper: bump compat for ArrayInterface to 6, (keep existing compat) (#1969) (@github-actions[bot])
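PR #1965 above documents how `SamePad` behaves when the stride is not 1. A quick illustration; the layer and input sizes are arbitrary:

```julia
using Flux

# With stride 1, SamePad() picks padding so the spatial output size equals the input size;
# with stride k, the output size becomes cld(input_size, k).
layer = Conv((3, 3), 1 => 8; pad = SamePad())
size(layer(rand(Float32, 28, 28, 1, 1)))   # (28, 28, 8, 1)
```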
v0.13.1
Flux v0.13.1
Closed issues:
- Batchnorm on GPU for Float64 values (#1897)
- Tag? (#1924)
- DataLoader causes scalar indexing on GPU in Flux v0.13.0 (regression) (#1935)
- Flux.flip with broadcasting warning (#1936)
- Add a workflow to clean-up `gh-pages` branch? (#1940)
- DimensionMismatch: All data containers must have the same number of observations. (#1941)
- Type instability in Recur for 3 dimensional arrays (#1947)
- What is the idiomatic way to get training loss from `gradient()`? (#1950) (see the sketch at the end of this release's notes)
- Dropout erroring on latest CUDA (#1960)
- AdaBelief issues (#1962)
Merged pull requests:
- Add a ton of doctests + fix outdated documentation in
.md
files (#1916) (@Saransh-cpp) - Get the DocBot up again! (#1937) (@Saransh-cpp)
- Broadcasting replaced with comprehension in the Flux.flip function. (#1938) (@fpartl)
- Fix type instabilities in apply!(optimizer, ...) (#1942) (@ancapdev)
- Add a workflow to delete PR previews (#1943) (@Saransh-cpp)
- Fix for progress logging to non-VS Code loggers (#1944) (@darsnack)
- Add Base.firstindex(c::Chain) = 1 (#1945) (@KronosTheLate)
- Recur type stability for 3d arrays (#1948) (@Marcovela)
- Resolve two warnings in the test suite (#1951) (@mcognetta)
- Update documentation on Split layer (#1953) (@JLDC)
- [docs] suggest using ADAM with LR=1 when combined with ExpDecay (#1955) (@ericphanson)
- Type stable `conv_reshape_bias` and AD-friendly `ConvDims` helpers (#1956) (@ToucheSir)
- onehotbatch with CuArray (#1959) (@CarloLucibello)
- AdaBelief bias correction (#1963) (@cossio)
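Issue #1950 above asks for the idiomatic way to recover the training loss alongside the gradient. One common pattern at the time of this release is Zygote's `pullback`, sketched here with placeholder model and data:

```julia
using Flux, Zygote

m = Dense(3 => 1)
x, y = rand(Float32, 3, 4), rand(Float32, 1, 4)
ps = Flux.params(m)

# pullback runs the forward pass once and returns the loss together with a closure
# that produces the gradients, so the training loss needn't be recomputed.
loss, back = Zygote.pullback(() -> Flux.Losses.mse(m(x), y), ps)
grads = back(one(loss))
```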
v0.13.0
Flux v0.13.0
Changes in NEWS.md
Closed issues:
- DepthwiseConv does not run on GPU (#459)
- Flux type piracy breaks REPL completions (#629)
- Cannot do double iteration of DataLoader (#1227)
- elu activation fails on nested pullbacks on GPU (#1383)
- Training not working for 1D types (#1479)
- adjoint of conv adjoint. (#1665)
- `pullback`'s `back` returns unexpected size if some parameters are not used (#1601)
- Allow specification of RNG in Dropout (#1617)
- deprecate DepthwiseConv once we have groups in standard conv (#1667)
- `Parallel` edge-cases (#1685)
- Layer printing interferes with different element types (#1690)
- Normalization Layers not interacting well with destructure/restructure (#1727)
- missing docstring for `Flux.params` and `trainable` (#1732)
- inconsistency between params and destructure (#1733)
- Parameter Sharing breaks `destructure` (#1767)
- Remove Juno.jl dependency (#1779)
- `Flux.destructure`'s restructure fails in the gradient if loss does not use all parameters (#1826)
- `Flux.chunk` for multi-dimensional arrays (#1841)
- onehotbatch performance (#1844)
- Issue taking gradients of Chains on GPU (#1853)
- `Chain` forgets names under `fmap` (#1857)
- Recurrent 3d interface uses a lot of memory (#1872)
- Gradient incorrect for Conv-layer and complex numbers (#1876)
- Add Siamese Contrastive Loss function (#1880)
- Urgent GSoC revisions are needed. (#1890)
- Flux v0.12.9 and the Flux.Tracker.gradient is wrong, why? (#1898)
- LoadError: UndefVarError: flatten not defined (#1899)
- Proposal: Move `params` to Zygote (#1900)
- This one is not in use, which one should I use instead in Flux? (#1903)
- ERROR: LoadError: Can't differentiate foreigncall expression (#1904)
- Missing docstring for `Flux.Data.Dataloader` (#1909)
- Different `Julia` versions at different places for doctests (#1914)
- `Parallel` layer behaves differently in a `Chain` than on its own (#1919)
- ADAMW not stable (#1920)
- Chain ignores Base.show function of custom layer (#1929)
Merged pull requests:
- v0.13 deprecations (#1751) (@CarloLucibello)
- Print channel dimensions of `Dense` like those of `Conv` (#1658) (@mcabbott)
- Replace unrolled `foldl` used to evaluate `Chain` with a better one (#1809) (@mcabbott)
- Zero is a real number (`Flux.Nil`) (#1830) (@mcabbott)
- Use faster activation functions (#1837) (@mcabbott)
- Add RNG support for Dropout/AlphaDropout (#1849) (@darsnack)
- Fix CI to run on LTS + latest + nightly (#1852) (@darsnack)
- Fix type-stability for normalization layers (#1856) (@pxl-th)
- Use ProgressLogging instead of Juno (#1859) (@darsnack)
- Speed up `onehotbatch` (#1861) (@mcabbott)
- Simplify `trainable`, `functor` and `Parallel` (#1862) (@mcabbott)
- Replace `@adjoint` with `rrule` (#1863) (@mcabbott)
- Depend on Optimisers.jl (#1864) (@mcabbott)
- rationalize CI (#1865) (@CarloLucibello)
- Updated Dropout for more input types. (#1867) (@ShoofLLC)
- fix adamw (#1868) (@CarloLucibello)
- Add OperatorLearning.jl to Flux downstream tests (#1869) (@ChrisRackauckas)
- Mark dropout_mask as non-differentiable (#1870) (@ToucheSir)
- Recurrent benchmarks (#1871) (@mkschleg)
- Changed view to eachslice for folding in recurrent (#1873) (@mkschleg)
- use MLUtils (#1874) (@CarloLucibello)
- Add a structural `loadparams!` (#1875) (@darsnack)
- Truncated normal initialisation for weights (#1877) (@theabhirath)
- Extending `Diagonal` (#1881) (@theabhirath)
- rm Flux.Zeros (#1882) (@mcabbott)
- CompatHelper: add new compat entry for SpecialFunctions at version 2, (keep existing compat) (#1883) (@github-actions[bot])
- Make RNN layers accept `in => out` (#1886) (@mcabbott)
- Speeding up onehotbatch by creating OneHotArray directly (#1888) (@TLipede)
- CompatHelper: bump compat for MLUtils to 0.2, (keep existing compat) (#1889) (@github-actions[bot])
- Addition of Siamese Contrastive Loss function ( Updated ) (#1892) (@arcAman07)
- Buildkite: don't persist registry across runs (#1893) (@ToucheSir)
- Use `destructure` from Optimisers.jl (#1901) (@mcabbott)
- RFC: Restrict `train!` to `AbstractOptimiser` (#1902) (@mcabbott)
- Add `dims` keywords to some tests (#1906) (@mcabbott)
- Mark initialisations nograd, restrict signatures (#1908) (@mcabbott)
- Add `MLUtils`'s docs and fix some missing docstrings (#1910) (@Saransh-cpp)
- Improvements for LayerNorm (#1911) (@theabhirath)
- Improve docs for initialisation (#1912) (@mcabbott)
- Turn off doctests while building docs (#1915) (@Saransh-cpp)
- dampening -> damping (#1918) (@alhirzel)
- remove DepthwiseConv type in favor of Conv (#1921) (@CarloLucibello)
- Allow activation function for Diagonal (#1925) (@theabhirath)
- Upgrade warnings for v0.13 (#1926) (@mcabbott)
- Rename `Diagonal` to `Scale` (#1927) (@mcabbott) (see the sketch below)
- Fix a code block (#1933) (@prbzrg)
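Several of the PRs above change how layers are written and printed: #1658 prints `Dense` channel dimensions like `Conv`, #1886 lets recurrent layers accept `in => out`, and #1927 renames `Diagonal` to `Scale`. A brief sketch of the resulting 0.13-style constructors; the sizes are arbitrary:

```julia
using Flux

Dense(2 => 3, relu)   # printed with its channel dimensions, like Conv (#1658)
RNN(3 => 5)           # recurrent layers accept the same in => out pair syntax (#1886)
Flux.Scale(5)         # elementwise scale-and-shift layer, formerly Flux.Diagonal (#1927)
```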