It's a little unclear to me whether I get a list of the voices that are available or the voices that are installed on the device.
If an app requests a voice which is available but not installed, what happens?
Is it up to the app to install the missing voice, or does iOS do this automatically?
For some reason, a male and a female voice are not offered for every language. What's the reason for that?
In iOS 15, when calling stopSpeaking on AVSpeechSynthesizer, the didFinish delegate method gets called instead of didCancel. This works fine on iOS 14 and earlier.
Hi,
I installed sklearn successfully and ran the MNIST toy example successfully.
Then I started to run my project. The funny thing is that everything seems fine at the start (at least no ImportError occurs), but when I make some changes to my code and try to run all cells again (I use JupyterLab), an ImportError occurs:
ImportError: dlopen(/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/qhull.cpython-39-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
Referenced from: /Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/qhull.cpython-39-darwin.so
Reason: tried: '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/../../../../liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/python3.9/site-packages/scipy/spatial/../../../../liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/bin/../lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/lib/liblapack.3.dylib' (no such file), '/Users/a/miniforge3/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file)
Then I have to uninstall scipy, sklearn, etc. and reinstall all of them, and my code runs again.
It feels like magic, I hate to say. Does anyone know how to permanently solve this problem and make sklearn more stable?
Hi everyone,
I found that the performance of the GPU is not as good as I expected (as slow as a turtle). I want to switch from the GPU to the CPU, but the mlcompute module cannot be found, which is weird.
The same code takes 156 s per epoch on Colab versus 40 minutes per epoch on my computer (JupyterLab).
I only used a small dataset (a few thousand data points), and each epoch only has 20 batches.
I am so disappointed, and it seems like the "powerful" GPU is a joke.
I am using macOS 12.0.1, and the version of tensorflow-macos is 2.6.0.
Can anyone tell me why this happens?
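As far as I know, the mlcompute module only existed in Apple's older tensorflow_macos fork (TF 2.4); with tensorflow-macos 2.6 plus tensorflow-metal, one way to force CPU execution — a sketch, not verified on this exact setup — is to hide the GPU device before any op runs:

```python
import tensorflow as tf

# Must run before any TensorFlow op touches a device:
# hiding the Metal GPU makes everything fall back to the CPU.
tf.config.set_visible_devices([], 'GPU')

print(tf.config.get_visible_devices('GPU'))  # []
```

After this call, model training should place all ops on /device:CPU:0 without any other code changes.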
I am using NLTagger to tag lexical classes of words, but it suddenly just stopped working. I boiled my code down to the most basic version, but it's never executing the closure of the enumerateTags() function. What do I have to change or what should I try?
let cupcake = "I like you, have a cupcake"
tagger.string = cupcake
tagger.enumerateTags(in: cupcake.startIndex..<cupcake.endIndex, unit: .word, scheme: .nameTypeOrLexicalClass) { tag, range in
    print("TAG")
    return true
}
I am comparing my M1 MBA with my 2019 16" Intel MBP. The M1 MBA has tensorflow-metal, while the Intel MBP has TF directly from Google.
Generally, the same programs run 2-5 times FASTER on the Intel MBP, which presumably has no GPU acceleration.
Is there anything I could have done wrong on the M1?
Here is the start of the metal run:
Metal device set to: Apple M1
systemMemory: 16.00 GB
maxCacheSize: 5.33 GB
2022-01-19 04:43:50.975025: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-01-19 04:43:50.975291: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )
2022-01-19 04:43:51.216306: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
Epoch 1/10
2022-01-19 04:43:51.298428: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
I'm using my 2020 Mac mini with the M1 chip, and this is the first time I've tried to use it for convolutional neural network training.
The problem: I installed Python (3.8.12) using miniforge3 and TensorFlow following this instruction, but I'm still facing a GPU problem when training a 3D U-Net.
Here's part of my code; I'm hoping to receive some suggestions to fix this.
import tensorflow as tf
from tensorflow import keras
import json
import numpy as np
import pandas as pd
import nibabel as nib
import matplotlib.pyplot as plt
from tensorflow.keras import backend as K
from tensorflow.python.client import device_lib

# Check available devices
def get_available_devices():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos]

print(get_available_devices())
Metal device set to: Apple M1
['/device:CPU:0', '/device:GPU:0']
2022-02-09 11:52:55.468198: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-02-09 11:52:55.468885: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )
X_norm_with_batch_dimension = np.expand_dims(X_norm, axis=0)
#tf.device('/device:GPU:0') #Have tried this line doesn't work
#tf.debugging.set_log_device_placement(True) #Have tried this line doesn't work
patch_pred = model.predict(X_norm_with_batch_dimension)
InvalidArgumentError: 2 root error(s) found.
(0) INVALID_ARGUMENT: CPU implementation of Conv3D currently only supports the NHWC tensor format.
[[node model/conv3d/Conv3D
(defined at /Users/mwshay/miniforge3/envs/tensor/lib/python3.8/site-packages/keras/layers/convolutional.py:231)
]] [[model/conv3d/Conv3D/_4]]
(1) INVALID_ARGUMENT: CPU implementation of Conv3D currently only supports the NHWC tensor format.
[[node model/conv3d/Conv3D
(defined at /Users/mwshay/miniforge3/envs/tensor/lib/python3.8/site-packages/keras/layers/convolutional.py:231) ]]
0 successful operations.
0 derived errors ignored.
The code runs on Google Colab but can't run locally on the Mac mini in a Jupyter notebook. The NHWC tensor format error suggests I'm using my CPU to execute the code instead of the GPU.
Is there any way to get the GPU used for training the network in TensorFlow?
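One thing worth ruling out first — a sketch under the assumption that the model is indeed falling back to the CPU: the CPU kernel for Conv3D only accepts channels-last input, so the input must be shaped (batch, depth, height, width, channels) and the layer left at data_format='channels_last'. A minimal check:

```python
import numpy as np
import tensorflow as tf

# Channels-last 3D input: (batch, depth, height, width, channels)
x = np.zeros((1, 8, 8, 8, 1), dtype=np.float32)

# With 'valid' padding and kernel 3, each spatial dim shrinks by 2.
layer = tf.keras.layers.Conv3D(filters=4, kernel_size=3,
                               data_format='channels_last')
print(layer(x).shape)  # (1, 6, 6, 6, 4)
```

If the real model uses channels-first tensors anywhere, transposing the input (or the whole data pipeline) to channels-last should at least remove this particular error.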
I am aware this question has been asked before, but the posted resolutions have not worked for me. When I try to import TensorFlow in my Python 3.9 environment, I get the following error:
uwewinter@Uwes-MBP % python3
Python 3.9.10 (main, Jan 15 2022, 11:40:53)
[Clang 13.0.0 (clang-1300.0.29.3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
2022-02-09 21:30:01.701794: F tensorflow/c/experimental/stream_executor/stream_executor.cc:808] Non-OK-status: stream_executor::MultiPlatformManager::RegisterPlatform( std::move(cplatform)) status: INTERNAL: platform is already registered with name: "METAL"
zsh: abort python3
I have the newest versions of TensorFlow-macos and TensorFlow-metal installed:
uwewinter@Uwes-MBP % pip3 list | grep tensorflow
tensorflow-estimator 2.7.0
tensorflow-macos 2.7.0
tensorflow-metal 0.3.0
macOS is up to date:
uwewinter@Uwes-MBP % sw_vers
ProductName: macOS
ProductVersion: 12.2
BuildVersion: 21D49
Mac is a 2021 MBP
uwewinter@Uwes-MBP % sysctl hw.model
hw.model: MacBookPro18,3
We ran into an issue where a more complex model fails to converge on the M1 Max GPU while it converges on its CPU and on non-M1 machines.
The performance is the same on CPU and GPU for models with a single RNN, but once we use two RNNs, the GPU fails to converge.
That said, the example below is based on nonsensical data for the model architecture used, but we can observe the same behavior here as in our production models (which, for obvious reasons, we cannot share). Mainly:
the loss goes down to the bottom of the 1e-06 precision range in all cases except when we use two RNNs on the GPU; during training we often reach the 1e-07 precision level
for the double-RNN-on-GPU condition, the loss does not go that low, sometimes only reaching the 1e-05 level
for our production data, the double RNN on GPU results in a loss of 1.0 that basically stays the same from the first epoch, while the other conditions often reach the 0.2 level with a clear learning curve
in the production model, increasing the LSTM_Cells number made the divergence more visible (with this synthetic data it does not happen)
the more complex the model is (after the RNN layers), the more visible the issue becomes.
Suspected issues:
different precision used in CPU and GPU training - we had to decrease the data values a lot to make the effect visible (if you work with the raw data, all approaches seem to produce comparable results)
somehow the vanishing gradient problem is more pronounced on the GPU, as indicated by worse performance as the complexity of the model increases.
Please let me know if you need any further details.
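The precision suspicion can be illustrated in plain NumPy — a toy sketch of reduced-precision arithmetic in general, not a claim about what the Metal backend actually does internally: once data values are scaled down, updates smaller than the type's epsilon vanish in float16 but survive in float32:

```python
import numpy as np

# float32 keeps a 1e-4 update; float16 (machine eps ~ 9.8e-4) rounds it away.
print(np.float32(1.0) + np.float32(1e-4))  # 1.0001
print(np.float16(1.0) + np.float16(1e-4))  # 1.0
```

This is consistent with the observation that the effect only appears after scaling the data down: with raw-scale values, the updates stay above any plausible rounding threshold.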
Software Stack:
macOS 12.1, TF 2.7, tensorflow-metal 0.3
Also tested on TF 2.8
Sample Syntax:
TEST CONDITIONS:
#conditions with issue: 1,2
gpu = 1 # 0 CPU, 1 GPU
model_size = 2 # 1 single RNN, 2 double RNN
#PARAMETERS
LSTM_Cells = 64
epochs = 300
batch = 128
import numpy as np
import pandas as pd
import sys
from sklearn import preprocessing
#"""
if 'tensorflow' in sys.modules:
    print("tensorflow uploaded")
    del sys.modules["tensorflow"]
    #del tf
    import tensorflow as tf
else:
    print("tensorflow not uploaded")
    import tensorflow as tf

if gpu == 1:
    pass
else:
    tf.config.set_visible_devices([], 'GPU')
#print("GPUs:", tf.config.list_physical_devices('GPU'))
print("GPUs:", tf.config.list_logical_devices('GPU'))
#print("CPUs:", tf.config.list_physical_devices('CPU'))
print("CPUs:", tf.config.list_logical_devices('CPU'))
#"""
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG', 'Displacement', 'Horsepower', 'Weight']
dataset = pd.read_csv(url, names=column_names,
                      na_values='?', comment='\t',
                      sep=' ', skipinitialspace=True).dropna()
scaler = preprocessing.StandardScaler().fit(dataset)
X_scaled = scaler.transform(dataset)
X_scaled = X_scaled * 0.001
# Large values
#x_train = np.array(dataset[['Horsepower', 'Weight']]).reshape(-1,2,2)
#y_train = np.array(dataset[['MPG','Displacement']]).reshape(-1,2,2)
# Small values
x_train = np.array(X_scaled[:,2:]).reshape(-1,2,2)
y_train = np.array(X_scaled[:,:2]).reshape(-1,2,2)
#print(dataset)
print(x_train.shape)
print(y_train.shape)
#print(weight.shape)  # 'weight' is not defined at this point
train_data = tf.data.Dataset.from_tensor_slices((x_train[:,:,:8], y_train)).cache().shuffle(x_train.shape[0]).batch(batch).repeat().prefetch(tf.data.experimental.AUTOTUNE)
if model_size == 2:
    #""" # MINIMAL NOT WORKING
    encoder_inputs = tf.keras.Input(shape=(x_train.shape[1], x_train.shape[2]))
    encoder_l1 = tf.keras.layers.LSTM(LSTM_Cells, return_sequences=True, return_state=True)
    encoder_l1_outputs = encoder_l1(encoder_inputs)
    encoder_l2 = tf.keras.layers.LSTM(LSTM_Cells, return_state=True)
    encoder_l2_outputs = encoder_l2(encoder_l1_outputs[0])
    dense_1 = tf.keras.layers.Dense(128, activation='relu')(encoder_l2_outputs[0])
    dense_2 = tf.keras.layers.Dense(64, activation='relu')(dense_1)
    dense_3 = tf.keras.layers.Dense(32, activation='relu')(dense_2)
    dense_4 = tf.keras.layers.Dense(16, activation='relu')(dense_3)
    flat = tf.keras.layers.Flatten()(dense_2)
    dense_5 = tf.keras.layers.Dense(4)(flat)  # 4 units (2x2) so the Reshape below is valid
    reshape_output = tf.keras.layers.Reshape([2, 2])(dense_5)
    model = tf.keras.models.Model(encoder_inputs, reshape_output)
    #"""
else:
    #""" # WORKING
    encoder_inputs = tf.keras.Input(shape=(x_train.shape[1], x_train.shape[2]))
    encoder_l1 = tf.keras.layers.LSTM(LSTM_Cells, return_sequences=True, return_state=True)
    encoder_l1_outputs = encoder_l1(encoder_inputs)
    dense_1 = tf.keras.layers.Dense(128, activation='relu')(encoder_l1_outputs[0])
    dense_2 = tf.keras.layers.Dense(64, activation='relu')(dense_1)
    dense_3 = tf.keras.layers.Dense(32, activation='relu')(dense_2)
    dense_4 = tf.keras.layers.Dense(16, activation='relu')(dense_3)
    flat = tf.keras.layers.Flatten()(dense_2)
    dense_5 = tf.keras.layers.Dense(4)(flat)  # 4 units (2x2) so the Reshape below is valid
    reshape_output = tf.keras.layers.Reshape([2, 2])(dense_5)
    model = tf.keras.models.Model(encoder_inputs, reshape_output)
    #"""
print(model.summary())
loss_tf = tf.keras.losses.MeanSquaredError()
model.compile(optimizer='adam', loss=loss_tf, run_eagerly=True)
model.fit(train_data,
          epochs=epochs,
          steps_per_epoch=3)
I am using SFSpeechRecognizer to perform speech recognition, but I am getting the following error.
[SpeechFramework] -[SFSpeechRecognitionTask localSpeechRecognitionClient:speechRecordingDidFail:]_block_invoke Ignoring subsequent local speech recording error: Error Domain=kAFAssistantErrorDomain Code=1101 "(null)"
Setting requiresOnDeviceRecognition to false works correctly, but previously this worked with true with no error.
The value of supportsOnDeviceRecognition was true, so the device reports that it supports on-device speech recognition.
iPad Pro 11-inch, iOS 16.5.
Is this expected behavior?
I need a simple text-to-speech avatar in my iOS app. iOS already has Memojis ready to go, but I cannot find anywhere in the dev docs how to access Memojis to use as a tool in app development. Am I missing something? Also, can anyone point me to any resources besides the Apple docs for using AVSpeechSynthesis?
Hi everyone, I might need some help with on-device recognition. It seems that the speech recognition task will discard whatever it has transcribed after a new sentence starts (or when it believes a new sentence has begun) during a single audio session, when requiresOnDeviceRecognition is set to true.
This doesn't happen with requiresOnDeviceRecognition set to false.
System environment: macOS 14 with Xcode 15, deploying to iOS 17
Thank you all!
I want to add shortcut and Siri support using the new AppIntents framework. Running my intent using shortcuts or from spotlight works fine, as the touch based UI for the disambiguation is shown. However, when I ask Siri to perform this action, she gets into a loop of asking me the question to set the parameter.
My AppIntent is implemented as following:
struct StartSessionIntent: AppIntent {
    static var title: LocalizedStringResource = "start_recording"

    @Parameter(title: "activity", requestValueDialog: IntentDialog("which_activity"))
    var activity: ActivityEntity

    @MainActor
    func perform() async throws -> some IntentResult & ProvidesDialog {
        let activityToSelect: ActivityEntity = self.activity
        guard let selectedActivity = Activity[activityToSelect.name] else {
            return .result(dialog: "activity_not_found")
        }
        ...
        return .result(dialog: "recording_started \(selectedActivity.name.localized())")
    }
}
The ActivityEntity is implemented like this:
struct ActivityEntity: AppEntity {
    static var typeDisplayRepresentation = TypeDisplayRepresentation(name: "activity")
    typealias DefaultQuery = MobilityActivityQuery
    static var defaultQuery: MobilityActivityQuery = MobilityActivityQuery()

    var id: String
    var name: String
    var icon: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(self.name.localized())", image: .init(systemName: self.icon))
    }
}

struct MobilityActivityQuery: EntityQuery {
    func entities(for identifiers: [String]) async throws -> [ActivityEntity] {
        Activity.all()?.compactMap({ activity in
            identifiers.contains(where: { $0 == activity.name }) ? ActivityEntity(id: activity.name, name: activity.name, icon: activity.icon) : nil
        }) ?? []
    }

    func suggestedEntities() async throws -> [ActivityEntity] {
        Activity.all()?.compactMap({ activity in
            ActivityEntity(id: activity.name, name: activity.name, icon: activity.icon)
        }) ?? []
    }
}
Does anyone have an idea what might be causing this and how I can fix this behavior? Thanks in advance.
I see a lot of crashes on the iOS 17 beta regarding some problem with "Text To Speech". Does anybody have a clue why TTS crashes? Is anybody else seeing the same problem?
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x000000037f729380
Exception Codes: 0x0000000000000001, 0x000000037f729380
VM Region Info: 0x37f729380 is not in any region. Bytes after previous region: 3748828033 Bytes before following region: 52622617728
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
MALLOC_NANO 280000000-2a0000000 [512.0M] rw-/rwx SM=PRV
---> GAP OF 0xd20000000 BYTES
commpage (reserved) fc0000000-1000000000 [ 1.0G] ---/--- SM=NUL ...(unallocated)
Termination Reason: SIGNAL 11 Segmentation fault: 11
Terminating Process: exc handler [36389]
Triggered by Thread: 9
.....
Thread 9 name:
Thread 9 Crashed:
0 libobjc.A.dylib 0x000000019eeff248 objc_retain_x8 + 16
1 AudioToolboxCore 0x00000001b2da9d80 auoop::RenderPipeUser::~RenderPipeUser() + 112 (AUOOPRenderPipePool.mm:400)
2 AudioToolboxCore 0x00000001b2e110b4 -[AUAudioUnit_XPC internalDeallocateRenderResources] + 92 (AUAudioUnit_XPC.mm:904)
3 AVFAudio 0x00000001bfa4cc04 AUInterfaceBaseV3::Uninitialize() + 60 (AUInterface.mm:524)
4 AVFAudio 0x00000001bfa894bc AVAudioEngineGraph::PerformCommand(AUGraphNodeBaseV3&, AVAudioEngineGraph::ENodeCommand, void*, unsigned int) const + 772 (AVAudioEngineGraph.mm:3317)
5 AVFAudio 0x00000001bfa93550 AVAudioEngineGraph::_Uninitialize(NSError**) + 132 (AVAudioEngineGraph.mm:1469)
6 AVFAudio 0x00000001bfa4b50c AVAudioEngineImpl::Stop(NSError**) + 396 (AVAudioEngine.mm:1081)
7 AVFAudio 0x00000001bfa4b094 -[AVAudioEngine stop] + 48 (AVAudioEngine.mm:193)
8 TextToSpeech 0x00000001c70b3c5c __55-[TTSSynthesisProviderAudioEngine renderSpeechRequest:]_block_invoke + 1756 (TTSSynthesisProviderAudioEngine.m:613)
9 libdispatch.dylib 0x00000001ae4b0740 _dispatch_call_block_and_release + 32 (init.c:1519)
10 libdispatch.dylib 0x00000001ae4b2378 _dispatch_client_callout + 20 (object.m:560)
11 libdispatch.dylib 0x00000001ae4b990c _dispatch_lane_serial_drain + 748 (queue.c:3885)
12 libdispatch.dylib 0x00000001ae4ba470 _dispatch_lane_invoke + 432 (queue.c:3976)
13 libdispatch.dylib 0x00000001ae4c5074 _dispatch_root_queue_drain_deferred_wlh + 288 (queue.c:6913)
14 libdispatch.dylib 0x00000001ae4c48e8 _dispatch_workloop_worker_thread + 404 (queue.c:6507)
...
Thread 9 crashed with ARM Thread State (64-bit):
x0: 0x0000000283309360 x1: 0x0000000000000000 x2: 0x0000000000000000 x3: 0x00000002833093c0
x4: 0x00000002833093c0 x5: 0x0000000101737740 x6: 0x0000000000000013 x7: 0x00000000ffffffff
x8: 0x0000000283309360 x9: 0x3c788942d067009a x10: 0x0000000101547000 x11: 0x0000000000000000
x12: 0x00000000000007fb x13: 0x00000000000007fd x14: 0x000000001ee24020 x15: 0x0000000000000020
x16: 0x0000b1037f729360 x17: 0x000000037f729360 x18: 0x0000000000000000 x19: 0x0000000000000000
x20: 0x00000001016a8de8 x21: 0x0000000283e21d00 x22: 0x0000000283b3f1f8 x23: 0x0000000283098000
x24: 0x00000001bfb4fc35 x25: 0x00000001bfb4fc43 x26: 0x000000028033a688 x27: 0x0000000280c93090
x28: 0x0000000000000000 fp: 0x000000016fc86490 lr: 0x00000001b2da9d80
sp: 0x000000016fc863e0 pc: 0x000000019eeff248 cpsr: 0x1000
esr: 0x92000006 (Data Abort) byte read Translation fault
XlaRuntimeError Traceback (most recent call last)
Cell In[49], line 4
1 arr = jnp.array( [7, 8, 9])
3 # Find indices where the condition is True
----> 4 indices = jnp.where(arr > 1)
6 print(indices)
XlaRuntimeError: UNKNOWN
macbook pro m2 max/ 64G / macos:13.2.1 (22D68)
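For comparison, the equivalent call in plain NumPy on the CPU — a reference sketch, not a jax-metal fix — returns the expected indices:

```python
import numpy as np

arr = np.array([7, 8, 9])
# Indices where the condition holds; all three elements exceed 1.
indices = np.where(arr > 1)
print(indices)  # (array([0, 1, 2]),)
```

So the failure appears to be specific to lowering jnp.where through the Metal backend rather than anything wrong with the expression itself.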
import tensorflow as tf
def runMnist(device='/device:CPU:0'):
    with tf.device(device):
        #tf.config.set_default_device(device)
        mnist = tf.keras.datasets.mnist
        (x_train, y_train), (x_test, y_test) = mnist.load_data()
        x_train, x_test = x_train / 255.0, x_test / 255.0
        model = tf.keras.models.Sequential([
            tf.keras.layers.Flatten(input_shape=(28, 28)),
            tf.keras.layers.Dense(128, activation='relu'),
            tf.keras.layers.Dropout(0.2),
            tf.keras.layers.Dense(10)
        ])
        loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
        model.compile(optimizer='adam',
                      loss=loss_fn,
                      metrics=['accuracy'])
        model.fit(x_train, y_train, epochs=10)

runMnist(device='/device:CPU:0')
runMnist(device='/device:GPU:0')
The ml_dtypes package that JAX depends on was recently updated to 0.3.0; as part of this change, the 'float8_e4m3b11' dtype was deprecated, and newer versions of JAX reflect this. The new ml_dtypes version now seems to be incompatible with JAX v0.4.11.
As jax-metal currently requires JAX v0.4.11, perhaps the dependency list should be updated to include ml_dtypes==0.2.0 in order to prevent the following import error:
AttributeError: module 'ml_dtypes' has no attribute 'float8_e4m3b11'
This essentially makes JAX unusable on import (and appears to be fixed by pip install ml_dtypes==0.2.0).
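Until the jax-metal dependency list pins it, a small guard — a sketch, with the required version 0.2.0 taken from the workaround above — can check the installed ml_dtypes before importing jax:

```python
from importlib.metadata import PackageNotFoundError, version

def ml_dtypes_pinned(required: str = "0.2.0") -> bool:
    """Return True only if ml_dtypes is installed at the pinned version."""
    try:
        return version("ml_dtypes") == required
    except PackageNotFoundError:
        return False

print(ml_dtypes_pinned())
```

Running this at the top of a notebook gives a clearer signal than the AttributeError deep inside the jax import.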
Trying to set up TensorFlow on a Mac M1.
conda install -c apple tensorflow-deps throws the following error:
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions following specifications were found to be incompatible with your system:
- feature:/osx-arm64::__osx==13.6=0
- tensorflow-deps -> grpcio[version='>=1.37.0,<2.0'] -> __osx[version='>=10.10|>=10.9']
Your installed version is: 13.6
The .condarc as follows:
channels:
- defaults
subdirs:
- osx-arm64
- osx-64
- noarch
ssl_verify: false
subdir: osx-arm64
And conda info:
active environment : base
active env location : /Users/mdrahman/miniconda3
shell level : 1
user config file : /Users/mdrahman/.condarc
populated config files : /Users/mdrahman/.condarc
conda version : 23.5.2
conda-build version : not installed
python version : 3.11.4.final.0
virtual packages : __archspec=1=arm64
__osx=13.6=0
__unix=0=0
base environment : /Users/mdrahman/miniconda3 (writable)
conda av data dir : /Users/mdrahman/miniconda3/etc/conda
conda av metadata url : None
channel URLs : https://repo.anaconda.com/pkgs/main/osx-arm64
https://repo.anaconda.com/pkgs/main/osx-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/osx-arm64
https://repo.anaconda.com/pkgs/r/osx-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /Users/mdrahman/miniconda3/pkgs
/Users/mdrahman/.conda/pkgs
envs directories : /Users/mdrahman/miniconda3/envs
/Users/mdrahman/.conda/envs
platform : osx-arm64
user-agent : conda/23.5.2 requests/2.29.0 CPython/3.11.4 Darwin/22.6.0 OSX/13.6
UID:GID : 501:20
netrc file : None
offline mode : False
Looking forward to your support.
Hi,
Are there plans to support complex numbers?
Something simple like this:
def return_complex(x):
    return x * 1 + 1.0j

x = jnp.ones((10))
print(return_complex(x))
results in an error.
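For reference, the same expression in plain NumPy — a CPU sketch, not a statement about jax-metal's roadmap — yields a complex array without any trouble:

```python
import numpy as np

def return_complex(x):
    # Multiplying by 1 and adding 1j promotes the real array to complex.
    return x * 1 + 1.0j

x = np.ones(10)
y = return_complex(x)
print(y.dtype)  # complex128
```

So the question is really whether the Metal backend will support lowering complex dtypes, not whether the expression is valid.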
I notice that some WWDC sessions have a Code tab (in addition to Overview and Transcript), but session 10176 does not. I tried what code I could see in the video, but it's obviously not a complete project.
It would help if the authors of the Session 10176 video could add the code to the session.