In my RequestHandler.swift (an extension of MatterAddDeviceExtensionRequestHandler), after device commissioning completes I call the getBaseDevice method on the MTRDeviceController:
func controller(_ controller: MTRDeviceController, commissioningComplete error: Error?, nodeID: NSNumber?) {
    if error != nil {
        os_log(.default, "TrinhVM: commissioningComplete error -> \(error!.localizedDescription)")
    } else {
        os_log(.default, "TrinhVM: commissioningComplete -> \(nodeID)")
        chipController.getBaseDevice(1, queue: DispatchQueue.main, completionHandler: { chipDevice, error in
            if chipDevice == nil {
                os_log(.debug, "Status: Failed to establish a connection with the device")
            } else {
                os_log(.error, "Status: Success to establish a connection with the device: \(chipDevice?.description ?? "")")
                let onOff = MTRBaseClusterOnOff(device: chipDevice, endpointID: 1, queue: DispatchQueue.main)
                // Send the "on" command to the device
                onOff?.on { error in
                    let resultString: String
                    if let error = error {
                        resultString = String(format: "An error occurred: 0x%02lx", error._code)
                    } else {
                        resultString = "On command success"
                    }
                    debugPrint(resultString)
                }
            }
        })
    }
}
It works well: the status is always "Status: Success to establish a connection with the device", and I can control the lightbulb there with chipDevice (an MTRBaseDevice).
But after commissioning has finished in the extension and control returns to the main application via its URL scheme, I call the same method:
chipController.getBaseDevice(1, queue: DispatchQueue.main, completionHandler: { chipDevice, error in
    if chipDevice == nil {
        os_log(.debug, "Status: Failed to establish a connection with the device")
    } else {
        os_log(.error, "Status: Success to establish a connection with the device: \(chipDevice?.description ?? "")")
        let onOff = MTRBaseClusterOnOff(device: chipDevice, endpointID: 1, queue: DispatchQueue.main)
        // Send the "on" command to the device
        onOff?.on { error in
            let resultString: String
            if let error = error {
                resultString = String(format: "An error occurred: 0x%02lx", error._code)
            } else {
                resultString = "On command success"
            }
            debugPrint(resultString)
        }
    }
})
This time it always fails with a timeout error:
Mdns: Resolve failure (src/platform/Darwin/DnssdImpl.cpp:476: CHIP Error 0x00000074: The operation has been cancelled)
OperationalSessionSetup[1:0000000000000015]: operational discovery failed: src/lib/address_resolve/AddressResolve_DefaultImpl.cpp:119: CHIP Error 0x00000032: Timeout
Creating NSError from src/lib/address_resolve/AddressResolve_DefaultImpl.cpp:119: CHIP Error 0x00000032: Timeout (context: (null))
"Failed to establish a connection with the device Optional(Error Domain=MTRErrorDomain Code=9 \"Transaction timed out.\" UserInfo={NSLocalizedDescription=Transaction timed out.})"
I don't know why the same method works in RequestHandler.swift but not in the app launched via the scheme.
Any help with this issue would be appreciated.
Thanks and best regards.
Since I updated to iOS 18, I'm losing battery even when I'm not using the phone.
Will AI be available for the iPhone 14?
I'm working on an app that uses HomeKit-enabled accessories such as wall plugs to control a photographic enlarger in a darkroom. This requires precise timing when toggling the power state of the plug. For example, the timer in my app might be set to 5 seconds, so I turn on the plug using writeValue() on the plug's power state characteristic, wait 5 seconds, then turn it off again.
I want to be able to measure whether the plug actually responded to the command and whether there was any delay in the plug turning on/off, so I can warn the user if network latency caused the plug to stay on longer than the set time.
Does writeValue() (in my case, the async/await version) only return when the plug has been turned on, or does it return as soon as the command has been sent, regardless of whether it has been received and acted on? Is there a way I can (a) quickly verify that the plug has been turned on/off, and (b) measure how long it took for the plug to act on the request, that is, the time elapsed between when writeValue() is called and when the plug actually updates to the corresponding value?
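For context, here is a minimal sketch of the kind of measurement I have in mind. The helper and its parameter are placeholders of mine, and whether awaiting writeValue really means the accessory has acknowledged the write is exactly what I'm unsure about:
import HomeKit

// Hypothetical helper: time a power-state write, then read the value back to
// confirm the plug actually changed state. `powerState` is assumed to be the
// HMCharacteristicTypePowerState characteristic of the plug's outlet service.
func setPower(_ on: Bool, powerState: HMCharacteristic) async throws -> TimeInterval {
    let start = Date()
    try await powerState.writeValue(on)   // does this wait for the accessory, or just for the send?
    let writeDuration = Date().timeIntervalSince(start)

    try await powerState.readValue()      // read back to verify the new state
    if let current = powerState.value as? Bool, current != on {
        print("Plug has not reported the expected state yet")
    }
    return writeDuration
}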
I confirmed that I had successfully opened the device and registered the callback method, but the callback is not invoked when I move the Magic Mouse.
What do the Magic Mouse and Magic Trackpad require for me to read their input values?
IOReturn ioReturn = IOHIDDeviceOpen(deviceRef, kIOHIDOptionsTypeNone);
if (kIOReturnSuccess == ioReturn) {
    IOHIDDeviceRegisterInputValueCallback(deviceRef, mouseInputValueCallback, NULL);
}

static void mouseInputValueCallback(void *context, IOReturn result, void *sender, IOHIDValueRef value) {
    NSLog(@"handle input value");
}
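For reference, here is a minimal Swift sketch of the same setup with an extra run-loop scheduling step. I'm only assuming this is the missing piece: as far as I understand, the input-value callback is not delivered unless the device is scheduled on a running run loop (or given a dispatch queue via IOHIDDeviceSetDispatchQueue):
import IOKit.hid

func startListening(to deviceRef: IOHIDDevice) {
    guard IOHIDDeviceOpen(deviceRef, IOOptionBits(kIOHIDOptionsTypeNone)) == kIOReturnSuccess else { return }

    // Non-capturing closure, so it can be passed as the C callback.
    IOHIDDeviceRegisterInputValueCallback(deviceRef, { _, _, _, value in
        let element = IOHIDValueGetElement(value)
        print("usagePage:", IOHIDElementGetUsagePage(element),
              "usage:", IOHIDElementGetUsage(element),
              "value:", IOHIDValueGetIntegerValue(value))
    }, nil)

    // Without this (or a dispatch-queue equivalent), the callback above never fires.
    IOHIDDeviceScheduleWithRunLoop(deviceRef, CFRunLoopGetCurrent(), CFRunLoopMode.defaultMode.rawValue)
}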
I used __IOHIDDeviceCopyMatchingInputElements to get the Magic Mouse's input elements:
<__NSArrayM 0x6000012c4900>(
timestamp: 0 type: 1 usagePage: 1 usage: 48 reportID: 16 value: 0,
timestamp: 0 type: 1 usagePage: 1 usage: 49 reportID: 16 value: 0,
timestamp: 0 type: 2 usagePage: 9 usage: 1 reportID: 16 value: 0,
timestamp: 0 type: 2 usagePage: 9 usage: 2 reportID: 16 value: 0
)
- (NSArray *)getHIDReports {
    if (!deviceRef) {
        return nil;
    }
    NSMutableArray *reports = [[NSMutableArray alloc] init];
    CFArrayRef elements = IOHIDDeviceCopyMatchingElements(deviceRef, NULL, kIOHIDOptionsTypeNone);
    if (!elements) {
        return reports;
    }
    for (CFIndex i = 0; i < CFArrayGetCount(elements); i++) {
        const IOHIDElementRef element = (IOHIDElementRef)CFArrayGetValueAtIndex(elements, i);
        IOHIDElementType eleType = IOHIDElementGetType(element);
        NSString *eleTypeStr = @"";
        switch (eleType) {
            case kIOHIDElementTypeInput_Misc:
                eleTypeStr = @"kIOHIDElementTypeInput_Misc";
                break;
            case kIOHIDElementTypeInput_Button:
                eleTypeStr = @"kIOHIDElementTypeInput_Button";
                break;
            case kIOHIDElementTypeInput_Axis:
                eleTypeStr = @"kIOHIDElementTypeInput_Axis";
                break;
            case kIOHIDElementTypeInput_ScanCodes:
                eleTypeStr = @"kIOHIDElementTypeInput_ScanCodes";
                break;
            case kIOHIDElementTypeInput_NULL:
                eleTypeStr = @"kIOHIDElementTypeInput_NULL";
                break;
            case kIOHIDElementTypeOutput:
                eleTypeStr = @"kIOHIDElementTypeOutput";
                break;
            case kIOHIDElementTypeFeature:
                eleTypeStr = @"kIOHIDElementTypeFeature";
                break;
            case kIOHIDElementTypeCollection:
                eleTypeStr = @"kIOHIDElementTypeCollection";
                break;
            default:
                break;
        }
        uint32_t page = IOHIDElementGetUsagePage(element);
        uint32_t usage = IOHIDElementGetUsage(element);
        uint32_t reportID = IOHIDElementGetReportID(element);
        uint32_t reportSize = IOHIDElementGetReportSize(element);
        uint32_t reportCount = IOHIDElementGetReportCount(element);
        NSString *elementStr = [[NSString alloc] initWithFormat:@" reportID:%d, reportSize:%d, type:%@, UsagePage:%d, usage:%d, reportCount:%d\n\n", reportID, reportSize, eleTypeStr, page, usage, reportCount];
        [reports addObject:elementStr];
    }
    CFRelease(elements);
    return reports;
}
reportID:0, reportSize:0, type:kIOHIDElementTypeCollection, UsagePage:1, usage:2, reportCount:1
reportID:0, reportSize:0, type:kIOHIDElementTypeCollection, UsagePage:1, usage:1, reportCount:1
reportID:16, reportSize:16, type:kIOHIDElementTypeInput_Misc, UsagePage:1, usage:48, reportCount:1
reportID:16, reportSize:16, type:kIOHIDElementTypeInput_Misc, UsagePage:1, usage:49, reportCount:1
reportID:16, reportSize:1, type:kIOHIDElementTypeInput_Button, UsagePage:9, usage:1, reportCount:1
reportID:16, reportSize:1, type:kIOHIDElementTypeInput_Button, UsagePage:9, usage:2, reportCount:1
reportID:71, reportSize:8, type:kIOHIDElementTypeFeature, UsagePage:6, usage:32, reportCount:1
reportID:85, reportSize:512, type:kIOHIDElementTypeFeature, UsagePage:65282, usage:85, reportCount:64
reportID:0, reportSize:0, type:kIOHIDElementTypeInput_NULL, UsagePage:0, usage:-1, reportCount:1
reportID:0, reportSize:1, type:kIOHIDElementTypeInput_NULL, UsagePage:0, usage:0, reportCount:1
reportID:16, reportSize:0, type:kIOHIDElementTypeInput_NULL, UsagePage:0, usage:-1, reportCount:1
reportID:16, reportSize:1, type:kIOHIDElementTypeInput_NULL, UsagePage:0, usage:0, reportCount:1
reportID:71, reportSize:0, type:kIOHIDElementTypeInput_NULL, UsagePage:0, usage:-1, reportCount:1
reportID:71, reportSize:1, type:kIOHIDElementTypeInput_NULL, UsagePage:0, usage:0, reportCount:1
reportID:85, reportSize:0, type:kIOHIDElementTypeInput_NULL, UsagePage:0, usage:-1, reportCount:1
reportID:85, reportSize:1, type:kIOHIDElementTypeInput_NULL, UsagePage:0, usage:0, reportCount:1
Devices:
VendorID ProductID LocationID UsagePage Usage RegistryID Transport Class Product UserClass Built-In
0x4c 0x265 0x3bab3ec3 1 2 0x100001ec4 Bluetooth AppleHSBluetoothHIDDriver Magic Trackpad (null) 0
0x4c 0x265 0x3bab3ec3 0 1 0x100001eb2 Bluetooth AppleHSBluetoothDevice Magic Trackpad (null) 0
0x5ac 0x30d 0x5abbb32b 1 2 0x100001221 Bluetooth BNBMouseDevice Magic Mouse (null) 0
0x4c 0x265 0x3bab3ec3 65280 3 0x100001ed2 Bluetooth AppleHSBluetoothHIDDriver Magic Trackpad (null) 0
0x4c 0x265 0x3bab3ec3 65280 13 0x100001ecc Bluetooth AppleHSBluetoothHIDDriver Magic Trackpad (null) 0
0x4c 0x265 0x3bab3ec3 65280 11 0x100001eba Bluetooth AppleHSBluetoothHIDDriver Magic Trackpad (null) 0
We have observed a significant increase in download time from our sensors to the mobile device after recent OS updates. We connect to the external sensors over BLE using the connection parameters (15, 15, 0, 6000).
https://mbientlab.com/tutorials/MetaMotionRL.html
Example 1: for 1 meter with a duration of 28 seconds, iOS 17.1 takes 44 seconds whereas iOS 17.5.1 takes 82 seconds.
Example 2: for 1 meter with a duration of 45 seconds, iOS 17.1 takes 74 seconds whereas iOS 17.5.1 takes 143 seconds.
Even with the same connection parameters, download times were considerably lower on iOS 15 and earlier devices.
We are currently using the connection parameters 15, 15, 0, 6000.
I have read in some documents that the minimum connection interval was changed to 20, but when I tried that, the download time increased further. I am seeking assistance on how to achieve the same download times as on the older versions.
https://mbientlab.com/community/discussion/comment/11852#Comment_11852
Hi there,
I have 10 M1 Mac minis deployed in an exhibition. For 3 years they ran fine with the exhibition power turning off at 6pm and back on at 9pm. Suddenly 3 of them stopped booting after a power loss, and now another one has.
I have retrieved them here, and no matter what I do they won't boot after a power loss. Any ideas?
I have done a full wipe and reinstall, I'm running the latest OS, and I've replaced the CMOS battery. Nothing helps.
Help!!!
I have a small application (already published on the Mac App Store) that monitors and notifies the user of some important state changes on their computer. To display Wi-Fi information, I listen for the "State:/Network/Interface/.*/AirPort" events of the System Configuration framework. Before 14.4, the returned dictionary had an "SSID_STR" property that contained the connected network's name, but from 14.4 on, if "SSID_STR" is present at all, it is always an empty string ("").
My first idea was to put "NSLocationUsageDescription" into the Info.plist (and enable Location under App Sandbox in the Signing & Capabilities section) and request permission with Core Location's requestWhenInUseAuthorization call, but even with the correct permission (.authorizedAlways) this did not succeed.
Is there any workaround or entitlement available to get back access to this information?
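For reference, here is a minimal Swift sketch of the setup I described (the store name "wifi-watcher" is arbitrary); on 14.4 and later the SSID_STR value in this dictionary comes back empty for me:
import SystemConfiguration

// Watch the AirPort state keys and print the SSID_STR field of each changed key.
let callback: SCDynamicStoreCallBack = { store, changedKeys, _ in
    for case let key as String in changedKeys as NSArray {
        if let info = SCDynamicStoreCopyValue(store, key as CFString) as? [String: Any] {
            print(key, "SSID_STR =", info["SSID_STR"] ?? "<missing>")
        }
    }
}

var context = SCDynamicStoreContext(version: 0, info: nil, retain: nil, release: nil, copyDescription: nil)
if let store = SCDynamicStoreCreate(nil, "wifi-watcher" as CFString, callback, &context),
   SCDynamicStoreSetNotificationKeys(store, nil, ["State:/Network/Interface/.*/AirPort"] as CFArray),
   let source = SCDynamicStoreCreateRunLoopSource(nil, store, 0) {
    CFRunLoopAddSource(CFRunLoopGetCurrent(), source, .defaultMode)
}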
Here is my write NDEF message code:
let readerWriter = NFCReaderWriter.sharedInstance()
let payloadData = NFCNDEFPayload(format: .nfcWellKnown, type: "U".data(using: .utf8)!, identifier: Data(), payload: "google.com".data(using: .utf8)!)
let message = NFCNDEFMessage(records: [payloadData])
readerWriter.write(message, to: tags.first!) { (error) in
    if let err = error {
        print("ERR: \(err)")
    } else {
        print("write success")
    }
    self.readerWriter.end()
}
First Issue:
When I want to read the NFC chip in the background without opening the app, the detection speed is not ideal. For example, this code writes "google.com". When performing background reading, the tag is not successfully read every time it is brought near the phone, and there seems to be a one-minute interval between reads.
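One assumption I'm now questioning: a well-known "U" record is supposed to carry the URI identifier-code prefix byte, and the wellKnownTypeURIPayload(string:) convenience builds that for you, whereas my raw payload above is just the bare string. A minimal sketch of what I mean (whether this affects background detection is exactly my question):
import CoreNFC

// Build the URI record with the convenience factory so the well-known "U"
// type gets its proper prefix byte, then write it as before.
if let uriRecord = NFCNDEFPayload.wellKnownTypeURIPayload(string: "https://google.com") {
    let message = NFCNDEFMessage(records: [uriRecord])
    // write `message` with the same readerWriter.write(_:to:) call as above
}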
Second Issue:
When I implemented this method in the AppDelegate:
- (BOOL)application:(UIApplication *)application continueUserActivity:(nonnull NSUserActivity *)userActivity restorationHandler:(nonnull void (^)(NSArray<id<UIUserActivityRestoring>> * _Nullable))restorationHandler {
    if (![userActivity.activityType isEqualToString:NSUserActivityTypeBrowsingWeb]) {
        return NO;
    }
    NSLog(@"userActivity: %@", userActivity.ndefMessagePayload);
    if (userActivity.ndefMessagePayload) {
        return YES;
    }
    return NO;
}
I found that userActivity.ndefMessagePayload outputs null. At the same time, looking at its private variables, it seems to have values:
Printing description of userActivity->_internal->_payloadDataCache:
{
"com.apple.corenfc.useractivity.ndefmessagepayload" = {length = 546, bytes = 0x62706c69 73743030 d4010203 04050607 ... 00000000 0000019c };
}
If ndefMessagePayload is null, how can I determine that it is an NSUserActivity from an NFC scan?
Hi there, folks! Hope you are doing well.
We want our two applications to listen for connection and disconnection notifications from the External Accessory framework when a device is connected over USB.
These applications run in the background and need to know when the device connects to and disconnects from USB. For that, we have configured our apps to listen for local notifications as follows:
EAAccessoryManager.shared().registerForLocalNotifications()
NotificationCenter.default.addObserver(self, selector: #selector(didConnectAccessory(_:)), name: Notification.Name.EAAccessoryDidConnect, object: nil)
NotificationCenter.default.addObserver(self, selector: #selector(didDisconnectAccessory(_:)), name: Notification.Name.EAAccessoryDidDisconnect, object: nil)
When the apps are running in the foreground, everything works correctly. However, when the apps are running in background mode with the External Accessory background mode enabled, the connection or disconnection event is not delivered to the applications, so they cannot initialize correctly.
We attach traces of the two applications running in the background during a connection event over USB.
In the first application the notification event is received correctly.
In the second application the notification event is NOT received.
We would like to investigate with you what may be happening.
We have opened a case in Feedback Assistant (FB13800710) so you can investigate further. We are also opening the discussion here so that other people can collaborate with us. Thank you so much.
Looking forward to hearing from you.
All the best!
Reproducible on iOS 17.4.1 (maybe earlier) and iOS 17.5. Maybe also iOS 17.4, but I can't test it.
NSMotionUsageDescription is correctly set (and always has been).
Fitness & motion activity authorization is correctly granted.
The delivery of absolute altitude changes has become super slow and might be inaccurate. The only value I get is exactly the same as the GPS altitude; the accelerometer data no longer seems to be taken into account.
This critical bug has broken two apps of mine.
How could I quickly solve this?
Thank you!
PS: the code is dead simple:
let operationQueue = OperationQueue()
self.altimeter.startAbsoluteAltitudeUpdates(to: operationQueue) { [weak self] (data, error) in
    guard let self = self else { return }
    guard let data else { return }
    DispatchQueue.main.async {
        // Use this value for display
        self.altitude = Measurement(value: data.altitude, unit: UnitLength.meters)
        /* DEBUG VIEW */
        self.updateDebugView(with: data.altitude)
    }
}
In our third-party SDK we would like to use the microphone (as an optional feature) when the hosting app allows it.
According to the docs, requestRecordPermission will crash if there is no NSMicrophoneUsageDescription in the hosting app's Info.plist.
Obviously I don't want to crash the app. I would like to check whether the hosting app will allow me to call requestRecordPermission before calling it. Is that possible?
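For illustration, here is the kind of check I have in mind; it's a sketch I have not verified against every edge case (an empty string, for example, may behave differently from a missing key):
import AVFoundation

// Only ask for record permission if the hosting app declared
// NSMicrophoneUsageDescription in its Info.plist; otherwise skip the
// optional microphone feature entirely.
func requestMicIfDescribed(_ completion: @escaping (Bool) -> Void) {
    let description = Bundle.main.object(forInfoDictionaryKey: "NSMicrophoneUsageDescription") as? String
    guard let description, !description.isEmpty else {
        completion(false)   // hosting app did not opt in; don't risk the crash
        return
    }
    AVAudioSession.sharedInstance().requestRecordPermission { granted in
        completion(granted)
    }
}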
Hello
Please can you tell me if this 'warning' was given because of a Translocation Error?
Howard Oakley, the developer of Mints, will not talk to me about this. The image has not been tampered with in any way.
Your advice will be most welcome.
Thanks.
We have a relatively simple app that uses Network.framework (NWConnection, NWEndpoint) to set up TCP connections with nearby devices also running the app. It has actually been working great for a while now, but we've recently noticed on iOS 17.4/17.4.1 that we spontaneously get:
nw_proto_tcp_route_init [C6:3] no mtu received
Sometimes the [C6:3] will be [C7:3] or another similar code. We may also occasionally see "No route to host" in our console logs, though not always. After this point the connection is effectively lost, but we never receive any update on our NWConnection stateUpdateHandler to act on. It's dead in the water, so to speak.
We've reproduced this issue with multiple devices on iOS 17.4.x and in multiple network settings (office, cafe, home networks, etc.). Nothing seems to make a difference. Any ideas on how to fix or work around this?
I saw a similar issue here: https://developer.apple.com/forums/thread/669519 but the original author never followed up and it's around 3 years old. I've captured a sysdiagnose log and can file a bug report if it warrants one.
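For completeness, here is roughly how we set up a connection, plus the viability and better-path handlers we are now adding; it is only an assumption on our part that these still fire when the route disappears even though no state change is ever delivered:
import Network

func start(_ connection: NWConnection) {
    connection.stateUpdateHandler = { state in
        print("state:", state)   // never reports .failed in the scenario above
    }
    connection.viabilityUpdateHandler = { viable in
        if !viable {
            print("connection no longer viable; tear down and redial?")
        }
    }
    connection.betterPathUpdateHandler = { betterPathAvailable in
        if betterPathAvailable {
            print("a better path is available; consider migrating the connection")
        }
    }
    connection.start(queue: .main)
}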
Just got my hands on the Apple Pencil Pro, and was looking forward to being able to add tactile feedback to my apps through the Pencil.
Apple appears to have updated its docs to mention the possibility of haptic feedback via the Pencil, and makes vague references to a certain SwiftUI modifier, but doesn't give any Pencil-specific guidelines, code, or details about its capabilities. I'd like to ask whether it's possible to use Core Haptics to queue custom haptic feedback (as of now, the standard code that works on iPhone doesn't seem to work on an iPad with a paired Pencil Pro), or, if that's not possible, whether there are any updated resources or example code for triggering the predefined Pencil Pro haptics.
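For reference, the SwiftUI modifier I assume the docs are alluding to is sensoryFeedback; here is a minimal sketch, though whether this feedback is ever rendered on the Pencil Pro rather than on the iPad itself is exactly what I can't tell:
import SwiftUI

// Fires predefined feedback whenever the trigger value changes.
struct ToolPicker: View {
    @State private var selectedTool = 0

    var body: some View {
        Picker("Tool", selection: $selectedTool) {
            Text("Pen").tag(0)
            Text("Eraser").tag(1)
        }
        .sensoryFeedback(.selection, trigger: selectedTool)
    }
}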
I bought my 13 Pro Max in December 2021. After only about 1.5 years of use, my phone experienced the white screen of death in mid-2023, and I had to pay 150+ to repair it. Not even a year later I experienced the black screen of death, which turned into a green screen. I brought it to an authorized Apple service provider and they told me that as long as my screen showed a green screen I could get it fixed for free. But when I came back the next day they told me they were not sure I could get it fixed for free, and that if I declined the screen replacement, which costs 500 SGD, I would have to pay a 65 SGD service fee. When I went the first time they didn't say anything about that. It feels like a scam, to be honest. My phone is in good, flawless condition with no scratches, and has never had overheating or liquid problems, yet this still happened. I previously had an 8 Plus and used it for 5 years with no need for repairs, while after 2.5 years the 13 Pro Max has had so many problems. I love Apple but I'm disappointed. I've spent a total of 2k plus on the phone and repairs.
Hi, my name is Zubair and I am a blogger. I recently created a website about gold prices in Oman (omangoldprices.com). I made a well-designed header, footer and everything else compared with my competitors, but the header of my website is disabled and not working on MacBook and MacBook Pro, and the logo does not show on iOS 12.6.1 on an iPhone 6. I am very confused about this; can anyone help me?
Hello,
I use Microsoft MSSQL (mcr.microsoft.com/mssql/server:2022-latest) on my M3 MacBook Pro. I have been running this Docker image with platform emulation set to linux/amd64 without issue for several months. After upgrading to Sonoma 14.5, I have been seeing poor performance with Rosetta emulation on my Mac; things that worked prior to the update no longer work. Cross-post of: MSSQL Docker GitHub issue.
Any support is appreciated. Thank you!
Hello Developer Community,
I hope you are all doing well. We need your help with an issue in deploying a Mac application outside the App Store, since notarization is required to release a product that way. We built the application with ElectronJS and signed it with a Developer ID Installer certificate. The next step is notarization, and that is where we hit the issue: it reports "Team is not yet configured for notarization." We raised the problem with the Apple team 6 months ago and are still getting the same response, "Our engineering team is working on it," without any timeline. I would like to know whether anyone has had the same issue, how long it can take to resolve, or whether you have any solutions. Your support means a lot to us. Thanks.
Dhiren Patel
Almost everyone I know absolutely HATES the Mac Mail V10 update in Big Sur. So many conveniences and the ease of operation that were available in 10.11.16 are gone.
The icons are greyed out, the column layout is abysmal, the search does not work properly, and way too many additional steps are now required to do the same job that V3 did with ease.
The standard question I hear is WHY would Apple change what worked perfectly for so many and make a great mail app user-unfriendly? There is an old adage I keep hearing repeated: "If it ain't broke, don't fix it"!
The question I keep hearing is: can anyone figure out a way to remove V10 from Big Sur and revert Mail back to V3? There should be either an option or a way to revert Mail back to what is loved by so many, without affecting the integrity of the security changes needed in Big Sur to make it safer.