Hi,
I'm trying to wrap my head around the FxPlug SDK in Xcode. I already sell Final Cut Pro titles for a company; the titles were built in Motion.
However, they want me to move them into an app, and I'm looking for any help on how to accomplish this.
What the app should do:
Allow users with an active subscription to our website to access the titles within FCPX, and deny access if they are not an active subscriber.
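Purely as a hedged sketch of the gating piece: the companion app could ask your website whether the signed-in user's subscription is active before installing or exposing the Motion templates. The endpoint, token handling, and response shape below are hypothetical placeholders, not an existing API.

import Foundation

// Hypothetical response shape and endpoint; replace with your site's real subscription API.
struct SubscriptionStatus: Decodable {
    let active: Bool
}

func isSubscriptionActive(authToken: String) async throws -> Bool {
    var request = URLRequest(url: URL(string: "https://example.com/api/subscription/status")!)
    request.setValue("Bearer \(authToken)", forHTTPHeaderField: "Authorization")
    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(SubscriptionStatus.self, from: data).active
}

Based on that result, the app could install or remove the templates from the user's Motion Templates folder; that installation step is only one possible design, not something FxPlug itself provides.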
Hi,
Could you please explain how to use SF Symbols animations in Final Cut? I would greatly appreciate your help.
Thank you!
I have generated FCPXML, but I can't figure out the issue:
<?xml version="1.0"?>
<fcpxml version="1.11">
<resources>
<format id="r1" name="FFVideoFormat3840x2160p2997" frameDuration="1001/30000s" width="3840" height="2160" colorSpace="1-1-1 (Rec. 709)"/>
<asset id="video0" name="11a(1-5).mp4" start="0s" hasVideo="1" videoSources="1" duration="6.81s">
<media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/11a(1-5).mp4"/>
</asset>
<asset id="video1" name="12(4)r8 mute.mp4" start="0s" hasVideo="1" videoSources="1" duration="9.94s">
<media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/12(4)r8 mute.mp4"/>
</asset>
<asset id="video2" name="13 mute.mp4" start="0s" hasVideo="1" videoSources="1" duration="6.51s">
<media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/13 mute.mp4"/>
</asset>
<asset id="video3" name="13x (8,14,24,29,38).mp4" start="0s" hasVideo="1" videoSources="1" duration="45.55s">
<media-rep kind="original-media" src="file:///Volumes/Dropbox/RealMedia Dropbox/Real Media/Media/Test/Test AE videos, City, testOLOLO/video/13x (8,14,24,29,38).mp4"/>
</asset>
</resources>
<library>
<event name="Untitled">
<project name="Untitled Project" uid="28B2D4F3-05C4-44E7-8D0B-70A326135EDD" modDate="2024-04-17 15:44:26 -0400">
<sequence format="r1" duration="4802798/30000s" tcStart="0s" tcFormat="NDF" audioLayout="stereo" audioRate="48k">
<spine>
<asset-clip ref="video0" offset="0/10000s" name="11a(1-5).mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
<asset-clip ref="video1" offset="12119/10000s" name="12(4)r8 mute.mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
<asset-clip ref="video2" offset="22784/10000s" name="13 mute.mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
<asset-clip ref="video3" offset="34544/10000s" name="13x (8,14,24,29,38).mp4" duration="0/10000s" format="r1" tcFormat="NDF"/>
</spine>
</sequence>
</project>
</event>
</library>
</fcpxml>
Any ideas?
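One thing that stands out (an observation, not a confirmed diagnosis): every asset-clip in the spine has duration="0/10000s", and Final Cut Pro generally rejects zero-length edits, so the clip durations probably need to be real, frame-aligned values that fit within each asset's duration. If you are generating the values in code, here is a small Swift sketch for converting seconds to a frame-aligned rational time string, assuming the 1001/30000s frame duration declared in the format resource:

// Converts a duration in seconds into an FCPXML rational time string,
// aligned to whole frames of the timeline's frame duration (1001/30000s here, i.e. 29.97 fps).
func fcpxmlTime(seconds: Double, frameNumerator: Int = 1001, frameDenominator: Int = 30000) -> String {
    let frames = Int((seconds * Double(frameDenominator) / Double(frameNumerator)).rounded())
    return "\(frames * frameNumerator)/\(frameDenominator)s"
}

// Example: fcpxmlTime(seconds: 6.81) returns "204204/30000s" (204 frames),
// which could be used as the duration of the first asset-clip.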
Hello everyone, I have been receiving this same crash report for the past month whenever I try to export a Final Cut Pro project. The export will get to about 88% completion, then the application crashes and I get the attached report. Any leads on how to fix this would be greatly appreciated! Thank you.
-Lauren
1. The FxRemoteWindowAPI protocol in FxPlug 4.3 provides no way to set window.frame.origin.
2. When I create my own NSWindow in the FxPlug plug-in, [Window setLevel:NSFloatingWindowLevel] is also not honored.
3. How can I keep my window in front of Final Cut Pro without interfering with the normal operation of Final Cut Pro?
NSPanel *panel = [[myPanel alloc] initWithContentRect:NSMakeRect(100, 100, 400, 300)
                                            styleMask:NSWindowStyleMaskTitled | NSWindowStyleMaskClosable
                                              backing:NSBackingStoreBuffered
                                                defer:NO];
[panel setLevel:NSFloatingWindowLevel]; // has no effect????
[panel makeKeyAndOrderFront:self];
Question: in FxPlug 4.3, setLevel cannot put the panel in front of Final Cut Pro and Motion?
Help! I haven't found an answer anywhere!
I'm trying to create code to generate an fcpxml file so I can automate Final Cut Pro timeline (project) creation. Here's an xml element that FCP successfully imports (and successfully creates a project/timeline).
<project name="2013-08-09 19_23_07 (id).mov">
<sequence format="r1">
<spine>
<asset-clip ref="r2" offset="0s" name="2013-08-09 19_23_07 (id).mov" start="146173027/60000s" duration="871871/60000s" tcFormat="DF" audioRole="dialogue"></asset-clip>
</spine>
</sequence>
</project>
The XML element above was generated by exporting a simple timeline with a single clip. The problem I'm having is that the media asset has timecode, which gives a start time relative to that timecode. When I try to remove the timecode attributes and change the start time to "0s"
<asset-clip ref="r2" offset="0s" name="2013-08-09 19_23_07 (id).mov" start="0s" duration="871871/60000s" audioRole="dialogue"></asset-clip>
FCP complains with the import error:
2013-08-09 19_23_07 (id).fcpxml Invalid edit with no respective media. (/fcpxml[1]/project[1]/sequence[1]/spine[1]/asset-clip[1])
I guess the question is: does AVAsset provide a way to get the timecode information and the timecode-based start offset, or is there a way to tell FCP to use a default start time independent of timecode?
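On the AVAsset side, the start offset typically lives in the asset's timecode track, and the first timecode sample can be read with AVAssetReader. The sketch below assumes a 32-bit 'tmcd' timecode sample (a single big-endian frame count), which is what most camera files carry; converting that frame number with the clip's frame rate gives the rational start value FCP exported for you.

import AVFoundation
import CoreMedia

// Reads the starting frame number from an asset's timecode track, if it has one.
// Assumes a 32-bit 'tmcd' timecode sample (a single big-endian frame count).
func startTimecodeFrames(of asset: AVAsset) async throws -> Int32? {
    guard let timecodeTrack = try await asset.loadTracks(withMediaType: .timecode).first else {
        return nil   // no timecode track; a start of "0s" should then be reasonable
    }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: timecodeTrack, outputSettings: nil)
    reader.add(output)
    reader.startReading()

    guard let sample = output.copyNextSampleBuffer(),
          let dataBuffer = CMSampleBufferGetDataBuffer(sample) else {
        return nil
    }
    var bigEndianFrames: Int32 = 0
    CMBlockBufferCopyDataBytes(dataBuffer,
                               atOffset: 0,
                               dataLength: MemoryLayout<Int32>.size,
                               destination: &bigEndianFrames)
    return Int32(bigEndian: bigEndianFrames)
}

This only covers reading the timecode; whether FCP accepts a start of "0s" for this particular asset is a separate question about the fcpxml itself.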
Hello,
First, some version and software details:
Software: iOS 18.1
Hardware: iPhone 14 Pro Max and later
Xcode: 16.0
Summary: AVAssetReader is not concatenating the opening video at the beginning of the output video. The output video should contain a scene of me introducing the content, followed by a blue screen while AVSpeechSynthesizer reads out the text I pasted into the field above the "Generate Video" button.
Details:
Now, let's talk about the app.
Basically, I’m developing an app that generates a video with the following features:
My app will create an output video that is split into an opening scene followed by a fully blue screen.
The opening scene will be taken from a video I choose from my gallery.
I will read the opening video using AVAssetReader as usual.
After the opening scene, I will use the content of a text read by AVSpeechSynthesizer.write().
After the opening scene, the synthesized audio will start playing while the blue screen is displayed.
All of this is already defined in the attached project.
Each project file has a comment at the beginning introducing its content.
How to test:
Write something in the field above the "Generate Video" button. For example, type "Hello, world!"
Then, press the "Library" button and select a video from the gallery, about 30 seconds long.
That’s it. Press the "Generate Video" button.
The result I’ve experienced is a crash or failure to generate the video.
Practical example of what I want to achieve:
Suppose I record a 30-second video where I say, "I’m going to tell you the story of Snow White."
Then, I paste the "Snow White" story into the field above the "Generate Video" button.
The output video should contain me saying, "I’m going to tell you the story of Snow White."
After that, the AVSpeechSynthesizer will read the story I pasted, while displaying a blue screen.
I look forward to a solution.
Thank you very much!
convertToCMSampleBuffer.swift
convertToPixelBuffer.swift
createInputs.swift
createVideo.swift
test.swift
saveVideo.swift
TestApp.swift
editingVideo.swift
sampleReaderProvider.swift
misc.swift
sampleProvider.swift
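For reference, this is roughly how the AVSpeechSynthesizer.write() step can collect the synthesized audio as PCM buffers. It is only a sketch under the assumption that the buffers are then converted to CMSampleBuffers and appended after the opening scene's samples; the function and variable names are mine, not taken from the attached project.

import AVFoundation

// Synthesizes `text` off-line and delivers the resulting PCM buffers.
// Sketch only: the buffers would still need to be converted to CMSampleBuffers
// and appended to an AVAssetWriterInput after the opening scene's samples.
func synthesizeSpeech(for text: String, completion: @escaping ([AVAudioPCMBuffer]) -> Void) {
    let synthesizer = AVSpeechSynthesizer()
    let utterance = AVSpeechUtterance(string: text)
    var collected: [AVAudioPCMBuffer] = []

    synthesizer.write(utterance) { [synthesizer] buffer in
        _ = synthesizer   // keep the synthesizer alive until writing finishes
        guard let pcm = buffer as? AVAudioPCMBuffer else { return }
        if pcm.frameLength == 0 {
            completion(collected)   // an empty buffer typically marks the end of the utterance
        } else {
            collected.append(pcm)
        }
    }
}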
Xcode 16:
VT_EXPORT void
VT_EXPORT OSStatus
VTPixelTransferSessionCreate(
CM_NULLABLE CFAllocatorRef allocator,
CM_RETURNS_RETAINED_PARAMETER CM_NULLABLE VTPixelTransferSessionRef * CM_NONNULL pixelTransferSessionOut) API_AVAILABLE(macos(10.8), ios(16.0), tvos(16.0), visionos(1.0)) API_UNAVAILABLE(watchos);
Xcode 15:
VT_EXPORT OSStatus
VTPixelTransferSessionCreate(
CM_NULLABLE CFAllocatorRef allocator,
CM_RETURNS_RETAINED_PARAMETER CM_NULLABLE VTPixelTransferSessionRef * CM_NONNULL pixelTransferSessionOut) VT_AVAILABLE_STARTING(10_8);
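For context, the visible difference between the two declarations is the availability annotation: the Xcode 16 SDK marks VTPixelTransferSessionCreate as available on iOS 16, tvOS 16, and visionOS 1 in addition to macOS 10.8, whereas the Xcode 15 header only declared macOS availability. A minimal, hedged Swift sketch of guarding the call when the deployment target is lower:

import VideoToolbox

var transferSession: VTPixelTransferSession?
if #available(iOS 16.0, tvOS 16.0, *) {
    let status = VTPixelTransferSessionCreate(allocator: kCFAllocatorDefault,
                                              pixelTransferSessionOut: &transferSession)
    if status == noErr, let session = transferSession {
        defer { VTPixelTransferSessionInvalidate(session) }
        // Transfer/convert pixel buffers here with
        // VTPixelTransferSessionTransferImage(session, from: sourceBuffer, to: destinationBuffer).
    }
}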
In the Apple Developer documentation for AVCaptureDevice.DiscoverySession (https://developer.apple.com/documentation/avfoundation/avcapturedevice/discoverysession) you can find the sentence:
"You can also key-value observe this property to monitor changes to the list of available devices."
But how do you use it?
I tried it with the code below, tested on my MacBook with EarPods.
When I disconnect the EarPods, nothing happens.
MacBook Air M2
macOS Sequoia 15.0.1
Xcode 16.0
import Foundation
import AVFoundation

let discovery_session = AVCaptureDevice.DiscoverySession.init(deviceTypes: [.microphone], mediaType: .audio, position: .unspecified)
let devices = discovery_session.devices
for device in devices {
    print(device.localizedName)
}
let device = devices[0]

let observer = Observer()
discovery_session.addObserver(observer, forKeyPath: "devices", options: [.new, .old], context: nil)

let input = try! AVCaptureDeviceInput(device: device)
let queue = DispatchQueue(label: "queue")
var output = AVCaptureAudioDataOutput()
let delegate = OutputDelegate()
output.setSampleBufferDelegate(delegate, queue: queue)

var session = AVCaptureSession()
session.beginConfiguration()
session.addInput(input)
session.addOutput(output)
session.commitConfiguration()
session.startRunning()

let group = DispatchGroup()
let q = DispatchQueue(label: "", attributes: .concurrent)
q.async(group: group, execute: DispatchWorkItem() {
    sleep(10)
    session.stopRunning()
})
_ = group.wait(timeout: .distantFuture)

class Observer: NSObject {
    override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
        print("Change")
        if keyPath == "devices" {
            if let newDevices = change?[.newKey] as? [AVCaptureDevice] {
                print("New devices: \(newDevices.map { $0.localizedName })")
            }
            if let oldDevices = change?[.oldKey] as? [AVCaptureDevice] {
                print("Old devices: \(oldDevices.map { $0.localizedName })")
            }
        }
    }
}

class OutputDelegate : NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        print("Output")
    }
}
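For what it's worth, here is a sketch of the same observation using Swift's block-based KVO instead of a string key path; it keeps the observation alive through the returned token. This is only an alternative formulation of the observer and doesn't by itself explain why no change fires when the EarPods are disconnected.

// Block-based KVO on the discovery session's devices property.
// The returned NSKeyValueObservation must be kept alive for as long as callbacks are wanted.
let observation = discovery_session.observe(\.devices, options: [.old, .new]) { _, change in
    let oldNames = (change.oldValue ?? []).map { $0.localizedName }
    let newNames = (change.newValue ?? []).map { $0.localizedName }
    print("Devices changed: \(oldNames) -> \(newNames)")
}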
Good afternoon
since I installed iOS 18 on my iPhone 15 Pro, I have problems using Apple CarPlay with my Ford Puma with SYNC 3. More specifically, problems with audio commands, selecting audio tracks, Bluetooth, etc.
Are you aware of this issue?
Thanks a lot
Regards
Alberto
I have a recent post outlining a similar question here. This time, though, I'm confident that inserting an array of Track works when inserting into ApplicationMusicPlayer.shared.queue, but now I'm not sure how to initialize the queue so that it displays the song title and artwork, for example. I'm also not sure how to get the current queue item's artist and album information, which I feel should be easy to do, so maybe I'm missing something obvious. I hope this paints a picture of what I'm trying to do, and I'm posting the necessary code here to help debug/figure out this problem.
import SwiftUI
import MusicKit

struct PlayBackView: View {
    @Environment(\.scenePhase) var scenePhase
    @Environment(\.openURL) private var openURL

    // Adding Enum Here for Question Sake
    enum PlayState {
        case play
        case pause
    }

    @State var song: Track
    @Binding var songs: [Track]?
    @State var isShuffled = false

    @State private var playState: PlayState = .pause
    @State private var songTimer: Int = Int.random(in: 5...30)
    @State private var roundTimer: Int = 5
    @State private var isTimerActive = false
    // @State private var volumeValue = VolumeObserver()
    @State private var isFirstPlay = true
    @State private var isDancing = false

    @State private var player = ApplicationMusicPlayer.shared

    private var isPlaying: Bool {
        return (player.state.playbackStatus == .playing)
    }

    let timer = Timer.publish(every: 1, on: .main, in: .common).autoconnect()

    var playPauseImage: String {
        switch playState {
        case .play:
            "pause.fill"
        case .pause:
            "play.fill"
        }
    }

    var body: some View {
        VStack {
            // Album Cover
            HStack(spacing: 20) {
                if let artwork = player.queue.currentEntry?.artwork {
                    ArtworkImage(artwork, height: 100)
                } else {
                    Image(systemName: "music.note")
                        .resizable()
                        .frame(width: 100, height: 100)
                }

                VStack(alignment: .leading) {
                    /*
                     This is where I want to display song title, album title, and artist name for example
                     */

                    // Song Title
                    Text(player.queue.currentEntry?.title ?? "Unable to Find Song Title")
                        .font(.title)
                        .fixedSize(horizontal: false, vertical: true)

                    // Album Title
                    // Text(player.queue.currentEntry ?? "Album Title Not Found")
                    //     .font(.caption)
                    //     .fixedSize(horizontal: false, vertical: true)

                    // I don't know what the subtitle actually grabs
                    Text(player.queue.currentEntry?.subtitle ?? "Artist Name Not Found")
                        .font(.caption)
                }
            }
            .padding()

            // Play/Pause Button
            Button(action: {
                handlePlayButton()
                isFirstPlay = false
            }, label: {
                Text(playState == .play ? "Pause" : isFirstPlay ? "Play" : "Resume")
                    .frame(maxWidth: .infinity)
            })
            .buttonStyle(.borderedProminent)
            .padding()
            .font(.largeTitle)
            .tint(.red)
        }
        .padding()
        // Maybe I should use the `.task` modifier here?
        .onAppear {
            // I'm sure this code could be improved but don't think it'll help answer the question at the moment.
            Task {
                if let songs = songs {
                    do {
                        if isShuffled {
                            let shuffledSongs = songs.shuffled()
                            try await player.queue.insert(shuffledSongs, position: .tail)
                            handlePlayButton()
                        } else {
                            try await player.queue.insert(songs, position: .tail)
                        }
                    } catch {
                        print(error.localizedDescription)
                    }
                }
            }
        }
    }

    private func handlePlayButton() {
        Task {
            if isPlaying {
                player.pause()
                playState = .pause
                isTimerActive = false
            } else {
                playState = .play
                await playTrack()
                isTimerActive = true
            }
        }
    }

    @MainActor
    private func playTrack() async {
        do {
            try await player.play()
        } catch {
            print(error.localizedDescription)
        }
    }
}

//#Preview {
//    PlayBackView()
//}
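Regarding the title/artist/album question: MusicPlayer.Queue.Entry has an item property, and when that item is a song (or music video) you can read title, artistName, albumTitle, and artwork from it directly. Here is a sketch of a helper for PlayBackView; the helper's name is mine, not MusicKit's.

import MusicKit

// Helper (hypothetical name) that pulls display fields out of the current queue entry,
// when the entry wraps a Song or MusicVideo.
extension PlayBackView {
    var currentSongInfo: (title: String, artist: String, album: String)? {
        guard let item = player.queue.currentEntry?.item else { return nil }
        switch item {
        case .song(let song):
            return (song.title, song.artistName, song.albumTitle ?? "Album Title Not Found")
        case .musicVideo(let video):
            return (video.title, video.artistName, video.albumTitle ?? "Album Title Not Found")
        @unknown default:
            return nil
        }
    }
}

The Text views in the VStack could then read, for example, Text(currentSongInfo?.artist ?? "Artist Name Not Found").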
Hello everyone, I am using the QRCodeScanner library in my project. QR code scanning was working on earlier iPadOS versions, but on iPadOS 18 it has stopped working.
Case-ID: 9391388
Our application uses timed metadata as part of a rating control system.
We noticed a problem in production, and diagnosis shows that we stop receiving timed metadata on iOS 18 only.
Our live streams are primed with metadata at least once per second, but we are seeing extended gaps in receiving this content, in excess of 10 minutes.
We have also observed that this happens more as the player climbs the bitrate ladder, and doesn't happen if we cap to a low resolution, i.e. a preferredMaximumResolution of 768x432.
Furthermore, if we throttle network conditions after we stop receiving metadata, we start receiving it again.
Following is a simple example that demonstrates the above behaviour. Unfortunately, I cannot share the metadata-primed live stream endpoint publicly, but I can provide it privately to Apple to reproduce the problem.
import UIKit
import AVKit

class ViewController: UIViewController, AVPlayerItemMetadataOutputPushDelegate {

    var player: AVPlayer?
    var itemMetadataOutput: AVPlayerItemMetadataOutput?

    override func viewDidAppear(_ animated: Bool) {
        guard let url = URL(string: "endpoint redacted") else {
            return
        }
        let player = AVPlayer(url: url)
        let controller = AVPlayerViewController()
        controller.player = player
        self.player = player

        present(controller, animated: true) {
            player.play()

            let currentItem = player.currentItem
            let itemMetadataOutput = AVPlayerItemMetadataOutput(identifiers: nil)
            self.itemMetadataOutput = itemMetadataOutput
            self.itemMetadataOutput?.setDelegate(self, queue: .main)
            currentItem?.add(itemMetadataOutput)
        }
    }

    public func metadataOutput(_ output: AVPlayerItemMetadataOutput,
                               didOutputTimedMetadataGroups groups: [AVTimedMetadataGroup],
                               from track: AVPlayerItemTrack?) {
        print("received metadata \(Date())")
    }
}
I'm currently working on an iOS project that involves loading and playing stereoscopic/spatial videos. I'm using the AVFoundation framework, specifically AVURLAsset, but I'm having trouble determining how to correctly load and handle stereoscopic videos.
I would like to know how to do this correctly.
Any guidance or code snippets would be greatly appreciated; I'm not understanding the Apple developer videos very well...
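In case it helps as a starting point, one way to confirm you are actually dealing with MV-HEVC stereo (spatial) video is to load the asset's video track and check the containsStereoMultiviewVideo media characteristic (available from iOS 17); playback itself can then go through AVPlayer as usual. A hedged sketch:

import AVFoundation

// Checks whether a local or remote movie contains MV-HEVC stereo (spatial) video.
func isStereoscopic(_ url: URL) async throws -> Bool {
    let asset = AVURLAsset(url: url)
    guard let videoTrack = try await asset.loadTracks(withMediaType: .video).first else {
        return false
    }
    let characteristics = try await videoTrack.load(.mediaCharacteristics)
    return characteristics.contains(.containsStereoMultiviewVideo)
}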
Thank you in advance for your help!
Best,
Lau
Hi,
In my app I am using MusicLibraryRequest<Artist> to fetch all of the artists in someone's library collection. With this response I then fetch each artist's albums: artist.with([.album]).
The response from this only gives the albums in the user's library collection. I would like to augment it with all of the albums for an artist from the full catalog.
I'm using MusicKit and targeting iOS18 and visionOS 2.
Could someone please point me towards the best way to approach this?
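One possible approach, sketched below under the assumption that name-based matching is acceptable (it can mismatch artists with identical names): look the library artist's name up in the catalog with MusicCatalogSearchRequest and then load the catalog artist's albums relationship.

import MusicKit

// Given a library artist's name, fetches the matching catalog artist's full album list.
// Name-based matching is an assumption here, not necessarily the intended MusicKit pattern.
func catalogAlbums(forArtistNamed name: String) async throws -> [Album] {
    var request = MusicCatalogSearchRequest(term: name, types: [Artist.self])
    request.limit = 1
    let response = try await request.response()
    guard let catalogArtist = response.artists.first else { return [] }
    let detailed = try await catalogArtist.with([.albums])
    if let albums = detailed.albums {
        return Array(albums)
    }
    return []
}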
After more than a week of work converting a bunch of Audio Unit projects into app + appex + framework, they all now load in-process correctly in the demo host app that is part of Xcode's template.
However, Logic Pro adamantly refuses to load them in-process.
Does Logic Pro simply not do that ever, or is there some hint or configuration my plugins need to provide to enable that? If it is unsupported, will it be supported in some future version of Logic?
The entire point of investing that week was performance, which is moot if it is impossible to test the impact of loading in-process in a real-world usage scenario.
I cannot mirror or extend my screen from my Mac mini M2 to my 10th-generation iPad. Whenever I click "mirror or extend screen", my Mac's external display refreshes after showing "no signal" and comes back on; meanwhile my iPad locks, and screen mirroring or extending is unsuccessful. However, I can mirror my iPad's screen to the Mac mini M2. Earlier everything was working; suddenly it is not.
Is anyone developing a way for users to wirelessly control, from another iOS or iPadOS device, an iOS or iPadOS device that is playing Apple Music to a DAC via USB into an amp? Specifically, full control: not Accessibility, not AirPlay to Apple TV or HomePods, not firmware-downgraded AirPort Expresses feeding a DAC, or the other hacks mentioned over the past decade since this "connect"-like feature has been desired by audiophiles. The goal is exclusive mode on the playback device (which iOS/iPadOS offer) while controlling it from a couch or a wheelchair across the room. Exclusive mode is the key feature of iOS and iPadOS that makes full, or nearly full, remote Apple Music control desirable.