82,200 questions
0 votes · 0 answers · 33 views
Apply Quantization on a CNN
I want to apply a quantization function to a deep CNN. The CNN is used for an image classification task (4 classes), and my data consists of 224×224 images. When I run this code, I get an error. ...
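The code itself isn't shown, but a minimal sketch of post-training quantization for a Keras CNN like this, done through the TFLite converter, looks roughly as follows (the trained `model` and the random calibration data are stand-ins):

import numpy as np
import tensorflow as tf

# model: the trained 4-class Keras CNN (assumed to exist); random arrays stand in for real 224x224 images.
def representative_data_gen():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]           # turn on quantization
converter.representative_dataset = representative_data_gen     # calibration data for int8 ranges
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())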
0 votes · 0 answers · 59 views
TensorFlow + RTX 5090 + WSL: CUDA 12 Installed in WSL but Windows Driver Uses CUDA 13 [closed]
I’m trying to run TensorFlow with GPU on Windows 11 + WSL2 using an NVIDIA RTX 5090.
The issue
TensorFlow currently supports:
CUDA 12.3
cuDNN 9.x
Requires NVIDIA driver ≥ 560.94
But RTX 50-series GPUs ...
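Whatever the driver situation, a quick sanity check inside WSL2 is to ask TensorFlow itself what it can see; a minimal sketch:

import tensorflow as tf

# An empty GPU list means the CUDA/cuDNN/driver stack is not being picked up inside WSL2.
print("TF version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))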
-1 votes · 0 answers · 29 views
Low accuracy in sleep stage classification model — need help improving performance
I’m working on a sleep stage classification model using deep learning, but I’m getting much lower accuracy than expected. I would appreciate help reviewing my code and suggestions for improving the ...
0 votes · 0 answers · 24 views
Flutter + LiteRT 1.4.1 – FlexSplitV / Flex delegate still not applied even after adding tensorflow-lite-select-tf-ops dependency
I’m building a Flutter Android app that performs real-time people counting from an RTSP camera stream (IP camera) using a YOLOv8n model exported to TensorFlow Lite.
Model details
- YOLOv8n trained in ...
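If FlexSplitV ends up inside the .tflite file, the model also has to be exported with select TF ops enabled, not just the runtime dependency added to the app; a hedged conversion sketch (the SavedModel path is a placeholder for however the YOLOv8n model was exported):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("yolov8n_saved_model")  # placeholder path
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,   # ordinary TFLite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS,     # allow TF (Flex) ops such as SplitV
]
with open("yolov8n_flex.tflite", "wb") as f:
    f.write(converter.convert())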
-1 votes · 1 answer · 36 views
Get image paths from tfds food101
I'm working on the food101 TensorFlow dataset and want to find the most wrong predictions of my EfficientNet model. For that purpose I'd need the image paths of the test data, but I don't know how I can ...
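As far as I can tell the TFDS food101 features only carry the image and label, not a filename, so one hedged workaround is to carry the example index through the pipeline and use it to look the misclassified examples up again afterwards; a sketch, with the preprocessing and the model assumed:

import tensorflow as tf
import tensorflow_datasets as tfds

ds = tfds.load("food101", split="validation", as_supervised=True)

def preprocess(idx, example):
    image, label = example
    image = tf.image.resize(image, (224, 224))   # assumed model input size
    return idx, image, label

# enumerate() attaches a running index to every example so predictions can be traced back.
indexed = ds.enumerate().map(preprocess).batch(32)

# model: the trained EfficientNet (assumed to exist)
# for idx, images, labels in indexed:
#     probs = model.predict(images, verbose=0)
#     ...  # keep (idx, label, probs) and sort by the probability of the true class afterwards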
0 votes · 0 answers · 37 views
AttributeError: module 'tensorflow' has no attribute 'saved_model' when importing gemma3 from ai_edge_torch in Google Colab
I'm trying to use Google's ai-edge-torch library to work with the Gemma 3 model in a Google Colab environment. However, I encounter the following error during import:
import torch
from ai_edge_torch....
-1 votes · 0 answers · 46 views
Cannot allocate memory in static TLS block when import albumentations and tensorflow [duplicate]
I am using a Python 3.10 virtual environment and try to import tensorflow as tf and import albumentations as A. I get an error: cannot allocate memory in static TLS block. Does anyone know what the ...
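This error is usually about shared-library load order rather than the code itself; a hedged workaround (which order helps can differ per machine) is to import the library that owns the large TLS segment first, or to preload it from the shell:

# Experiment with the import order: importing the OpenCV/OpenMP-heavy side first sometimes
# lets the static TLS block be allocated before TensorFlow loads its own libraries.
import cv2
import albumentations as A
import tensorflow as tf

print(tf.__version__, A.__version__)

# Shell-level alternative (the library path is an example, adjust to your system):
#   export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libgomp.so.1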
0 votes · 0 answers · 27 views
How To Work With Window-Size Inputs That Don't Match The Pooling?
Let's take a minimal 1d cnn autoencoder model as an example:
def cnn_session_encoder(inputs):
    x = layers.Conv1D(32, 3, activation="relu", padding="same")(inputs)
    x = layers....
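The excerpt stops early, but a common way to handle window sizes that are not multiples of the pooling factor is to zero-pad the time axis up to the next multiple before the encoder and crop it back off after the decoder; a minimal sketch assuming two MaxPooling1D(2) stages (so a factor of 4):

import tensorflow as tf
from tensorflow.keras import layers

POOL_FACTOR = 4  # two MaxPooling1D(2) stages -> the length must be divisible by 4

def build_autoencoder(window_size, channels=1):
    inputs = layers.Input(shape=(window_size, channels))

    # Pad the time axis up to the next multiple of POOL_FACTOR so pooling and upsampling line up.
    pad = (-window_size) % POOL_FACTOR
    x = layers.ZeroPadding1D((0, pad))(inputs)

    # Encoder
    x = layers.Conv1D(32, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(16, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling1D(2)(x)

    # Decoder
    x = layers.Conv1D(16, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling1D(2)(x)
    x = layers.Conv1D(32, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling1D(2)(x)
    x = layers.Conv1D(channels, 3, padding="same")(x)

    # Crop the padding back off so the output length matches the input window again.
    outputs = layers.Cropping1D((0, pad))(x)
    return tf.keras.Model(inputs, outputs)

model = build_autoencoder(window_size=150)  # 150 is not divisible by 4
model.summary()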
1 vote · 1 answer · 57 views
ValueError: Can't convert non-rectangular Python sequence to Tensor in text-classification problem
I am building a text classification system which requires a large preprocessing and training script. The script reads variable-length token sequences and attempts to build a tf.data.Dataset using ...
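That error usually means a plain Python list of unequal-length lists is being passed to tf.constant or from_tensor_slices; two standard ways around it are padding to a common length or using a ragged tensor, sketched below with toy data:

import tensorflow as tf

# Toy variable-length token-id sequences standing in for the real preprocessed data.
sequences = [[3, 14, 15], [9, 2], [6, 5, 35, 8]]
labels = [0, 1, 0]

# Option 1: pad to a common length, then build a regular dataset.
padded = tf.keras.preprocessing.sequence.pad_sequences(sequences, padding="post")
ds_padded = tf.data.Dataset.from_tensor_slices((padded, labels))

# Option 2: keep the lengths as-is with a RaggedTensor.
ragged = tf.ragged.constant(sequences)
ds_ragged = tf.data.Dataset.from_tensor_slices((ragged, labels))

for x, y in ds_padded.take(1):
    print(x.numpy(), y.numpy())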
0 votes · 1 answer · 65 views
EfficientNet-B7 Shape mismatch
Working on a project of image classification using EfficientNet-B7. What is wrong in this code? Why is the error showing when I run this line of code? The error states:
Shape mismatch in layer #1 (named ...
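A shape mismatch in layer #1 when loading EfficientNet-B7 weights usually means the classifier head or the input shape doesn't match the checkpoint; the usual transfer-learning setup drops the ImageNet top and adds a fresh head, sketched below with the class count as a placeholder:

import tensorflow as tf

NUM_CLASSES = 4  # placeholder; use the real number of classes

base = tf.keras.applications.EfficientNetB7(
    include_top=False,            # drop the 1000-class ImageNet head
    weights="imagenet",
    input_shape=(600, 600, 3),    # B7's default resolution; adjust to the data
)
base.trainable = False

inputs = tf.keras.Input(shape=(600, 600, 3))
x = base(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])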
3 votes · 1 answer · 53 views
How to fix ValueError: Only instances of keras.Layer can be added to a Sequential model when adding tensorflow_hub.KerasLayer?
I am learning TensorFlow and transfer learning, and I am trying to add a TensorFlow Hub feature extractor to a Keras Sequential model. But I get this error:
ValueError: Only instances of keras.Layer ...
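This usually comes from Keras 3 (bundled with recent TensorFlow), whose Sequential only accepts keras.Layer instances, while hub.KerasLayer is a legacy tf.keras layer; a hedged workaround is to build the model with the separately installed tf-keras package (the hub URL below is just an example feature vector):

# pip install tensorflow tensorflow-hub tf-keras
import tensorflow_hub as hub
import tf_keras  # legacy Keras 2 package, compatible with hub.KerasLayer

feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/5",
    input_shape=(224, 224, 3),
    trainable=False,
)

model = tf_keras.Sequential([
    feature_extractor,
    tf_keras.layers.Dense(10, activation="softmax"),  # class count is a placeholder
])
model.summary()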
0 votes · 0 answers · 46 views
Why is the wall time from the full trace different from the timer in the compute function?
In my custom operator (which runs on the CPU), I use butil::Timer to measure the time taken, as shown below:
void Compute(OpKernelContext* ctx) override {
  butil::Timer total_timer;
  total_timer.start();
  ...
1 vote · 1 answer · 77 views
TPU Initialization Fails (OpKernel Missing) Despite Active TPU Runtime in Kaggle
I am facing a persistent issue when trying to initialize the TPU in my notebook. I have already confirmed that:
My account is Verified.
The Notebook Accelerator is set to TPU.
My TPU quota is ...
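For comparison, the standard TPU initialization sequence on Kaggle/Colab is below; if the OpKernel error still appears with exactly this sequence, it points at the runtime or the TensorFlow version rather than the notebook code (a hedged sketch):

import tensorflow as tf

try:
    # Detect and connect to the TPU provided by the notebook runtime.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
    print("TPU devices:", tf.config.list_logical_devices("TPU"))
except (ValueError, tf.errors.NotFoundError) as e:
    print("TPU not available, falling back:", e)
    strategy = tf.distribute.get_strategy()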
1 vote · 0 answers · 48 views
Compatibility between streamlit and protobuf
I am unable to use print(tf.version.VERSION) to check the TensorFlow version. The reason is that TensorFlow looks for runtime_version in protobuf (from what I have learnt), and that is only supported in ...
2 votes · 0 answers · 76 views
Issue Replicating TF-Lite Conv2D Quantized Inference Output
I am trying to reproduce the exact layer-wise output of a quantized EfficientNet model (TFLite model, TensorFlow 2.17) by re-implementing Conv2D, DepthwiseConv2D, FullyConnected, Add, Mul, Sub and ...
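When re-implementing those kernels, the step that most often breaks bit-exactness is requantization: accumulate in int32 (bias included), rescale by input_scale * weight_scale / output_scale per output channel, then re-add the output zero point and clamp; a hedged NumPy sketch of that arithmetic (plain float rounding here, whereas TFLite uses a fixed-point multiplier plus shift):

import numpy as np

def requantize(acc_int32, input_scale, weight_scales, output_scale, output_zero_point):
    # acc_int32: int32 accumulators per output channel, bias already added
    #            (the bias is quantized with scale input_scale * weight_scale, zero point 0).
    # weight_scales: per-output-channel scales; per-tensor quantization is the single-value case.
    effective_scale = input_scale * np.asarray(weight_scales) / output_scale
    scaled = np.round(acc_int32 * effective_scale) + output_zero_point
    return np.clip(scaled, -128, 127).astype(np.int8)  # clamp to the int8 output range

# Tiny example: three output channels of a conv, accumulators already computed.
acc = np.array([12345, -6789, 400], dtype=np.int64)
print(requantize(acc, input_scale=0.02, weight_scales=[0.001, 0.002, 0.0015],
                 output_scale=0.05, output_zero_point=-1))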