Data augmentation


Overview

This tutorial demonstrates manual image manipulations and augmentation using tf.image.

Data augmentation is a common technique for improving results and avoiding overfitting; see Overfitting and Underfitting for other such techniques.

Setup

pip install -q git+https://github.com/tensorflow/docs
import tensorflow as tf
from tensorflow.keras import layers
AUTOTUNE = tf.data.experimental.AUTOTUNE

import tensorflow_docs as tfdocs
import tensorflow_docs.plots

import tensorflow_datasets as tfds

import PIL.Image

import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12, 5)

Let's first try out the augmentation operations on a single image, and then augment a whole dataset to train a model.

Download this image, by Von.grzanka, for augmentation.

image_path = tf.keras.utils.get_file("cat.jpg", "https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg")
PIL.Image.open(image_path)
Downloading data from https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg
24576/17858 [=========================================] - 0s 0us/step


Read and decode the image into a tensor.

image_string = tf.io.read_file(image_path)
image = tf.image.decode_jpeg(image_string, channels=3)

Define a function to visualize and compare the original and augmented images side by side.

def visualize(original, augmented):
  fig = plt.figure()
  plt.subplot(1,2,1)
  plt.title('Original image')
  plt.imshow(original)

  plt.subplot(1,2,2)
  plt.title('Augmented image')
  plt.imshow(augmented)

Augment a single image

Flip the image

Flip the image either horizontally or vertically. tf.image.flip_left_right mirrors it horizontally; a vertical-flip sketch follows below.

flipped = tf.image.flip_left_right(image)
visualize(image, flipped)

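For a vertical flip, tf.image.flip_up_down works the same way. A minimal sketch (not in the original notebook):

flipped_ud = tf.image.flip_up_down(image)  # Mirror the image top-to-bottom
visualize(image, flipped_ud)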

Grayscale the image

Convert the image to grayscale. tf.squeeze removes the single channel dimension so that matplotlib can display it.

grayscaled = tf.image.rgb_to_grayscale(image)
visualize(image, tf.squeeze(grayscaled))
plt.colorbar()


Saturate the image

Saturate an image by providing a saturation factor.

saturated = tf.image.adjust_saturation(image, 3)
visualize(image, saturated)


Change image brightness

Change the brightness of the image by providing a brightness delta.

bright = tf.image.adjust_brightness(image, 0.4)
visualize(image, bright)


Rotate the image

Rotate an image by 90 degrees.

rotated = tf.image.rot90(image)
visualize(image, rotated)

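tf.image.rot90 also accepts a k argument giving the number of 90-degree rotations. A small sketch rotating the image by 180 degrees:

rotated_180 = tf.image.rot90(image, k=2)  # k=2 means two 90-degree rotations
visualize(image, rotated_180)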

Center crop the image

Crop the image from the center, keeping the fraction of the image you specify.

cropped = tf.image.central_crop(image, central_fraction=0.5)
visualize(image, cropped)

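Related to cropping, tf.image.resize_with_crop_or_pad center-crops or zero-pads an image to an exact target size; it's used in the augmentation pipeline below. A small sketch (the 400x400 target is an arbitrary choice):

padded = tf.image.resize_with_crop_or_pad(image, 400, 400)  # Pad the image out to 400x400
visualize(image, padded)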

See the tf.image reference for details about available augmentation options.
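
Most of the single-image operations above also have random_* counterparts in tf.image that sample the adjustment each time they run, which is usually what you want during training. A minimal sketch using two of them (the parameter values here are arbitrary, not from the original notebook):

random_bright = tf.image.random_brightness(image, max_delta=0.3)  # Delta drawn from [-0.3, 0.3]
random_sat = tf.image.random_saturation(image, lower=0.5, upper=2.0)  # Factor drawn from [0.5, 2.0]
visualize(image, random_bright)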

Augment a dataset and train a model with it

Train a model on an augmented dataset.

dataset, info = tfds.load('mnist', as_supervised=True, with_info=True)
train_dataset, test_dataset = dataset['train'], dataset['test']

num_train_examples = info.splits['train'].num_examples

Write a function to augment the images, then map it over the dataset. This returns a dataset that augments the data on the fly.

def convert(image, label):
  image = tf.image.convert_image_dtype(image, tf.float32) # Cast and normalize the image to [0,1]
  return image, label

def augment(image, label):
  image, label = convert(image, label)
  image = tf.image.resize_with_crop_or_pad(image, 34, 34) # Add 6 pixels of padding (3 per side)
  image = tf.image.random_crop(image, size=[28, 28, 1]) # Random crop back to 28x28
  image = tf.image.random_brightness(image, max_delta=0.5) # Random brightness

  return image, label
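
Before mapping augment over the whole dataset, it can help to spot-check it on a single example. This is a quick sanity check that isn't part of the original notebook; tf.squeeze drops the channel dimension so matplotlib can display the 28x28 images:

sample_image, sample_label = next(iter(train_dataset))  # One (image, label) pair
aug_image, _ = augment(sample_image, sample_label)      # Apply the augmentation once
visualize(tf.squeeze(sample_image), tf.squeeze(aug_image))
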
BATCH_SIZE = 64
# Only use a subset of the data so it's easier to overfit, for this tutorial
NUM_EXAMPLES = 2048

Create the augmented dataset.

augmented_train_batches = (
    train_dataset
    # Only train on a subset, so you can quickly see the effect.
    .take(NUM_EXAMPLES)
    # Cache before augmenting, so the random augmentation is re-applied each epoch.
    .cache()
    .shuffle(num_train_examples//4)
    # The augmentation is added here.
    .map(augment, num_parallel_calls=AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(AUTOTUNE)
)

And a non-augmented one for comparison.

non_augmented_train_batches = (
    train_dataset
    # Only train on a subset, so you can quickly see the effect.
    .take(NUM_EXAMPLES)
    .cache()
    .shuffle(num_train_examples//4)
    # No augmentation.
    .map(convert, num_parallel_calls=AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(AUTOTUNE)
) 

Set up the validation dataset. It is the same whether or not you use augmentation.

validation_batches = (
    test_dataset
    .map(convert, num_parallel_calls=AUTOTUNE)
    .batch(2*BATCH_SIZE)
)

Create and compile the model. The model is a simple fully-connected network with two hidden layers and no convolutions.

def make_model():
  model = tf.keras.Sequential([
      layers.Flatten(input_shape=(28, 28, 1)),
      layers.Dense(4096, activation='relu'),
      layers.Dense(4096, activation='relu'),
      layers.Dense(10)
  ])
  model.compile(optimizer = 'adam',
                loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
  return model

Train the model without augmentation:

model_without_aug = make_model()

no_aug_history = model_without_aug.fit(non_augmented_train_batches, epochs=50, validation_data=validation_batches)
Epoch 1/50
32/32 [==============================] - 1s 25ms/step - loss: 0.7789 - accuracy: 0.7539 - val_loss: 0.3646 - val_accuracy: 0.8893
Epoch 2/50
32/32 [==============================] - 0s 12ms/step - loss: 0.1741 - accuracy: 0.9487 - val_loss: 0.2963 - val_accuracy: 0.9112
Epoch 3/50
32/32 [==============================] - 0s 12ms/step - loss: 0.0869 - accuracy: 0.9683 - val_loss: 0.2924 - val_accuracy: 0.9176
...
Epoch 49/50
32/32 [==============================] - 0s 12ms/step - loss: 0.0137 - accuracy: 0.9961 - val_loss: 0.5277 - val_accuracy: 0.9279
Epoch 50/50
32/32 [==============================] - 0s 12ms/step - loss: 0.0127 - accuracy: 0.9966 - val_loss: 0.6834 - val_accuracy: 0.9160

Train it again with augmentation:

model_with_aug = make_model()

aug_history = model_with_aug.fit(augmented_train_batches, epochs=50, validation_data=validation_batches)
Epoch 1/50
32/32 [==============================] - 0s 14ms/step - loss: 2.4468 - accuracy: 0.3008 - val_loss: 1.1565 - val_accuracy: 0.6538
Epoch 2/50
32/32 [==============================] - 0s 12ms/step - loss: 1.4342 - accuracy: 0.5215 - val_loss: 0.8370 - val_accuracy: 0.7640
Epoch 3/50
32/32 [==============================] - 0s 12ms/step - loss: 0.9822 - accuracy: 0.6685 - val_loss: 0.5018 - val_accuracy: 0.8640
...
Epoch 49/50
32/32 [==============================] - 0s 12ms/step - loss: 0.1652 - accuracy: 0.9478 - val_loss: 0.1470 - val_accuracy: 0.9526
Epoch 50/50
32/32 [==============================] - 0s 12ms/step - loss: 0.1645 - accuracy: 0.9443 - val_loss: 0.1540 - val_accuracy: 0.9546

Conclusion

In this example, the augmented model converges to a validation accuracy of about 95%, a few percentage points higher than the model trained without data augmentation.

plotter = tfdocs.plots.HistoryPlotter()
plotter.plot({"Augmented": aug_history, "Non-Augmented": no_aug_history}, metric = "accuracy")
plt.title("Accuracy")
plt.ylim([0.75,1])


In terms of loss, the non-augmented model is clearly in the overfitting regime: its training loss approaches zero while its validation loss climbs. The augmented model, while a few epochs slower to converge, is still training correctly and clearly not overfitting.

plotter = tfdocs.plots.HistoryPlotter()
plotter.plot({"Augmented": aug_history, "Non-Augmented": no_aug_history}, metric = "loss")
plt.title("Loss")
plt.ylim([0,1])
